00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 90 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3268 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.178 > git --version # 'git version 2.39.2' 00:00:00.178 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.312 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.321 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.331 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:05.331 > git config core.sparsecheckout # timeout=10 00:00:05.341 > git read-tree -mu HEAD # timeout=10 00:00:05.357 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.374 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.374 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:05.454 [Pipeline] Start of Pipeline 00:00:05.466 [Pipeline] library 00:00:05.468 Loading library shm_lib@master 00:00:05.468 Library shm_lib@master is cached. Copying from home. 00:00:05.480 [Pipeline] node 00:00:05.488 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:05.490 [Pipeline] { 00:00:05.499 [Pipeline] catchError 00:00:05.500 [Pipeline] { 00:00:05.510 [Pipeline] wrap 00:00:05.518 [Pipeline] { 00:00:05.524 [Pipeline] stage 00:00:05.525 [Pipeline] { (Prologue) 00:00:05.539 [Pipeline] echo 00:00:05.540 Node: VM-host-SM0 00:00:05.545 [Pipeline] cleanWs 00:00:05.553 [WS-CLEANUP] Deleting project workspace... 00:00:05.553 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.558 [WS-CLEANUP] done 00:00:05.742 [Pipeline] setCustomBuildProperty 00:00:05.827 [Pipeline] httpRequest 00:00:05.841 [Pipeline] echo 00:00:05.842 Sorcerer 10.211.164.101 is alive 00:00:05.848 [Pipeline] httpRequest 00:00:05.853 HttpMethod: GET 00:00:05.853 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.854 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.867 Response Code: HTTP/1.1 200 OK 00:00:05.868 Success: Status code 200 is in the accepted range: 200,404 00:00:05.869 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.347 [Pipeline] sh 00:00:09.632 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.647 [Pipeline] httpRequest 00:00:09.671 [Pipeline] echo 00:00:09.673 Sorcerer 10.211.164.101 is alive 00:00:09.681 [Pipeline] httpRequest 00:00:09.685 HttpMethod: GET 00:00:09.686 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.686 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.696 Response Code: HTTP/1.1 200 OK 00:00:09.697 Success: Status code 200 is in the accepted range: 200,404 00:00:09.697 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:59.593 [Pipeline] sh 00:00:59.874 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:02.451 [Pipeline] sh 00:01:02.734 + git -C spdk log --oneline -n5 00:01:02.734 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:02.734 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:02.734 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:02.734 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:02.734 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:02.754 [Pipeline] withCredentials 00:01:02.764 > git --version # timeout=10 00:01:02.775 > git --version # 'git version 2.39.2' 00:01:02.790 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:02.792 [Pipeline] { 00:01:02.801 [Pipeline] retry 00:01:02.803 [Pipeline] { 00:01:02.820 [Pipeline] sh 00:01:03.099 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:05.026 [Pipeline] } 00:01:05.048 [Pipeline] // retry 00:01:05.055 [Pipeline] } 00:01:05.076 [Pipeline] // withCredentials 00:01:05.086 [Pipeline] httpRequest 00:01:05.104 [Pipeline] echo 00:01:05.106 Sorcerer 10.211.164.101 is alive 00:01:05.115 [Pipeline] httpRequest 00:01:05.119 HttpMethod: GET 00:01:05.120 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:05.120 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:05.121 Response Code: HTTP/1.1 200 OK 00:01:05.122 Success: Status code 200 is in the accepted range: 200,404 00:01:05.122 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:11.304 [Pipeline] sh 00:01:11.583 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:12.984 [Pipeline] sh 00:01:13.259 + git -C dpdk log --oneline -n5 00:01:13.260 eeb0605f11 version: 23.11.0 00:01:13.260 238778122a doc: update release notes for 23.11 00:01:13.260 46aa6b3cfc doc: fix description of RSS features 
00:01:13.260 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:13.260 7e421ae345 devtools: support skipping forbid rule check 00:01:13.276 [Pipeline] writeFile 00:01:13.292 [Pipeline] sh 00:01:13.570 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:13.581 [Pipeline] sh 00:01:13.858 + cat autorun-spdk.conf 00:01:13.858 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.858 SPDK_TEST_NVMF=1 00:01:13.858 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.858 SPDK_TEST_USDT=1 00:01:13.858 SPDK_RUN_UBSAN=1 00:01:13.858 SPDK_TEST_NVMF_MDNS=1 00:01:13.858 NET_TYPE=virt 00:01:13.858 SPDK_JSONRPC_GO_CLIENT=1 00:01:13.858 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:13.858 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:13.858 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.864 RUN_NIGHTLY=1 00:01:13.866 [Pipeline] } 00:01:13.883 [Pipeline] // stage 00:01:13.898 [Pipeline] stage 00:01:13.900 [Pipeline] { (Run VM) 00:01:13.913 [Pipeline] sh 00:01:14.190 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:14.190 + echo 'Start stage prepare_nvme.sh' 00:01:14.190 Start stage prepare_nvme.sh 00:01:14.190 + [[ -n 5 ]] 00:01:14.190 + disk_prefix=ex5 00:01:14.190 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:14.190 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:14.190 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:14.190 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.190 ++ SPDK_TEST_NVMF=1 00:01:14.190 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.190 ++ SPDK_TEST_USDT=1 00:01:14.190 ++ SPDK_RUN_UBSAN=1 00:01:14.190 ++ SPDK_TEST_NVMF_MDNS=1 00:01:14.190 ++ NET_TYPE=virt 00:01:14.190 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:14.190 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:14.190 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:14.190 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.190 ++ RUN_NIGHTLY=1 00:01:14.190 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:14.190 + nvme_files=() 00:01:14.190 + declare -A nvme_files 00:01:14.190 + backend_dir=/var/lib/libvirt/images/backends 00:01:14.190 + nvme_files['nvme.img']=5G 00:01:14.190 + nvme_files['nvme-cmb.img']=5G 00:01:14.190 + nvme_files['nvme-multi0.img']=4G 00:01:14.190 + nvme_files['nvme-multi1.img']=4G 00:01:14.190 + nvme_files['nvme-multi2.img']=4G 00:01:14.190 + nvme_files['nvme-openstack.img']=8G 00:01:14.190 + nvme_files['nvme-zns.img']=5G 00:01:14.190 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:14.190 + (( SPDK_TEST_FTL == 1 )) 00:01:14.191 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:14.191 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:14.191 + for nvme in "${!nvme_files[@]}" 00:01:14.191 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:14.191 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.191 + for nvme in "${!nvme_files[@]}" 00:01:14.191 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:14.191 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.191 + for nvme in "${!nvme_files[@]}" 00:01:14.191 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:14.191 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:14.191 + for nvme in "${!nvme_files[@]}" 00:01:14.191 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:14.191 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.191 + for nvme in "${!nvme_files[@]}" 00:01:14.191 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:14.191 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.191 + for nvme in "${!nvme_files[@]}" 00:01:14.191 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:14.448 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.448 + for nvme in "${!nvme_files[@]}" 00:01:14.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:14.448 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.448 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:14.448 + echo 'End stage prepare_nvme.sh' 00:01:14.448 End stage prepare_nvme.sh 00:01:14.459 [Pipeline] sh 00:01:14.737 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:14.737 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:01:14.737 00:01:14.737 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:14.737 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:14.737 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:14.737 HELP=0 00:01:14.737 DRY_RUN=0 00:01:14.737 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:14.737 NVME_DISKS_TYPE=nvme,nvme, 00:01:14.737 NVME_AUTO_CREATE=0 00:01:14.737 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:14.737 NVME_CMB=,, 00:01:14.737 NVME_PMR=,, 00:01:14.737 NVME_ZNS=,, 00:01:14.737 NVME_MS=,, 00:01:14.737 NVME_FDP=,, 00:01:14.737 
SPDK_VAGRANT_DISTRO=fedora38 00:01:14.737 SPDK_VAGRANT_VMCPU=10 00:01:14.737 SPDK_VAGRANT_VMRAM=12288 00:01:14.737 SPDK_VAGRANT_PROVIDER=libvirt 00:01:14.737 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:14.737 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:14.737 SPDK_OPENSTACK_NETWORK=0 00:01:14.737 VAGRANT_PACKAGE_BOX=0 00:01:14.737 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:14.737 FORCE_DISTRO=true 00:01:14.737 VAGRANT_BOX_VERSION= 00:01:14.737 EXTRA_VAGRANTFILES= 00:01:14.737 NIC_MODEL=e1000 00:01:14.737 00:01:14.737 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:14.737 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:18.028 Bringing machine 'default' up with 'libvirt' provider... 00:01:18.599 ==> default: Creating image (snapshot of base box volume). 00:01:18.599 ==> default: Creating domain with the following settings... 00:01:18.599 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720987267_54cec19146b8bd48c83c 00:01:18.599 ==> default: -- Domain type: kvm 00:01:18.599 ==> default: -- Cpus: 10 00:01:18.599 ==> default: -- Feature: acpi 00:01:18.599 ==> default: -- Feature: apic 00:01:18.599 ==> default: -- Feature: pae 00:01:18.599 ==> default: -- Memory: 12288M 00:01:18.599 ==> default: -- Memory Backing: hugepages: 00:01:18.599 ==> default: -- Management MAC: 00:01:18.599 ==> default: -- Loader: 00:01:18.599 ==> default: -- Nvram: 00:01:18.599 ==> default: -- Base box: spdk/fedora38 00:01:18.599 ==> default: -- Storage pool: default 00:01:18.599 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720987267_54cec19146b8bd48c83c.img (20G) 00:01:18.599 ==> default: -- Volume Cache: default 00:01:18.599 ==> default: -- Kernel: 00:01:18.599 ==> default: -- Initrd: 00:01:18.599 ==> default: -- Graphics Type: vnc 00:01:18.599 ==> default: -- Graphics Port: -1 00:01:18.599 ==> default: -- Graphics IP: 127.0.0.1 00:01:18.599 ==> default: -- Graphics Password: Not defined 00:01:18.599 ==> default: -- Video Type: cirrus 00:01:18.599 ==> default: -- Video VRAM: 9216 00:01:18.599 ==> default: -- Sound Type: 00:01:18.599 ==> default: -- Keymap: en-us 00:01:18.599 ==> default: -- TPM Path: 00:01:18.599 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:18.599 ==> default: -- Command line args: 00:01:18.599 ==> default: -> value=-device, 00:01:18.599 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:18.599 ==> default: -> value=-drive, 00:01:18.599 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:18.599 ==> default: -> value=-device, 00:01:18.599 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.599 ==> default: -> value=-device, 00:01:18.599 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:18.599 ==> default: -> value=-drive, 00:01:18.599 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:18.599 ==> default: -> value=-device, 00:01:18.599 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.599 ==> default: -> value=-drive, 00:01:18.599 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:18.599 ==> default: -> value=-device, 00:01:18.599 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.599 ==> default: -> value=-drive, 00:01:18.599 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:18.599 ==> default: -> value=-device, 00:01:18.599 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.868 ==> default: Creating shared folders metadata... 00:01:18.868 ==> default: Starting domain. 00:01:20.766 ==> default: Waiting for domain to get an IP address... 00:01:38.874 ==> default: Waiting for SSH to become available... 00:01:38.874 ==> default: Configuring and enabling network interfaces... 00:01:42.174 default: SSH address: 192.168.121.157:22 00:01:42.174 default: SSH username: vagrant 00:01:42.174 default: SSH auth method: private key 00:01:44.081 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:50.634 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:57.189 ==> default: Mounting SSHFS shared folder... 00:01:58.127 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.127 ==> default: Checking Mount.. 00:01:59.506 ==> default: Folder Successfully Mounted! 00:01:59.506 ==> default: Running provisioner: file... 00:02:00.438 default: ~/.gitconfig => .gitconfig 00:02:00.696 00:02:00.696 SUCCESS! 00:02:00.696 00:02:00.696 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:00.696 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.696 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:00.696 00:02:00.705 [Pipeline] } 00:02:00.723 [Pipeline] // stage 00:02:00.733 [Pipeline] dir 00:02:00.733 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:02:00.735 [Pipeline] { 00:02:00.750 [Pipeline] catchError 00:02:00.751 [Pipeline] { 00:02:00.766 [Pipeline] sh 00:02:01.060 + vagrant ssh-config --host vagrant 00:02:01.060 + sed -ne /^Host/,$p 00:02:01.060 + tee ssh_conf 00:02:04.358 Host vagrant 00:02:04.359 HostName 192.168.121.157 00:02:04.359 User vagrant 00:02:04.359 Port 22 00:02:04.359 UserKnownHostsFile /dev/null 00:02:04.359 StrictHostKeyChecking no 00:02:04.359 PasswordAuthentication no 00:02:04.359 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:04.359 IdentitiesOnly yes 00:02:04.359 LogLevel FATAL 00:02:04.359 ForwardAgent yes 00:02:04.359 ForwardX11 yes 00:02:04.359 00:02:04.372 [Pipeline] withEnv 00:02:04.373 [Pipeline] { 00:02:04.383 [Pipeline] sh 00:02:04.655 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:04.655 source /etc/os-release 00:02:04.655 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.655 # Minimal, systemd-like check. 
00:02:04.655 if [[ -e /.dockerenv ]]; then 00:02:04.655 # Clear garbage from the node's name: 00:02:04.655 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.655 # $HOSTNAME is the actual container id 00:02:04.655 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.655 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.655 # We can assume this is a mount from a host where container is running, 00:02:04.655 # so fetch its hostname to easily identify the target swarm worker. 00:02:04.655 container="$(< /etc/hostname) ($agent)" 00:02:04.655 else 00:02:04.655 # Fallback 00:02:04.655 container=$agent 00:02:04.656 fi 00:02:04.656 fi 00:02:04.656 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.656 00:02:04.923 [Pipeline] } 00:02:04.944 [Pipeline] // withEnv 00:02:04.952 [Pipeline] setCustomBuildProperty 00:02:04.963 [Pipeline] stage 00:02:04.965 [Pipeline] { (Tests) 00:02:04.980 [Pipeline] sh 00:02:05.257 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.528 [Pipeline] sh 00:02:05.806 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.076 [Pipeline] timeout 00:02:06.076 Timeout set to expire in 40 min 00:02:06.078 [Pipeline] { 00:02:06.093 [Pipeline] sh 00:02:06.369 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:06.935 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:02:06.951 [Pipeline] sh 00:02:07.229 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:07.499 [Pipeline] sh 00:02:07.776 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.106 [Pipeline] sh 00:02:08.382 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:08.640 ++ readlink -f spdk_repo 00:02:08.640 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.640 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.640 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.640 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.640 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.640 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.640 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.640 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:08.640 + cd /home/vagrant/spdk_repo 00:02:08.640 + source /etc/os-release 00:02:08.640 ++ NAME='Fedora Linux' 00:02:08.641 ++ VERSION='38 (Cloud Edition)' 00:02:08.641 ++ ID=fedora 00:02:08.641 ++ VERSION_ID=38 00:02:08.641 ++ VERSION_CODENAME= 00:02:08.641 ++ PLATFORM_ID=platform:f38 00:02:08.641 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:08.641 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.641 ++ LOGO=fedora-logo-icon 00:02:08.641 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:08.641 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.641 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:08.641 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.641 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.641 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.641 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:08.641 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.641 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:08.641 ++ SUPPORT_END=2024-05-14 00:02:08.641 ++ VARIANT='Cloud Edition' 00:02:08.641 ++ VARIANT_ID=cloud 00:02:08.641 + uname -a 00:02:08.641 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:08.641 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:08.899 Hugepages 00:02:08.899 node hugesize free / total 00:02:08.899 node0 1048576kB 0 / 0 00:02:08.899 node0 2048kB 0 / 0 00:02:08.899 00:02:08.899 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.159 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.159 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:09.159 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:09.159 + rm -f /tmp/spdk-ld-path 00:02:09.159 + source autorun-spdk.conf 00:02:09.159 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.159 ++ SPDK_TEST_NVMF=1 00:02:09.159 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.159 ++ SPDK_TEST_USDT=1 00:02:09.159 ++ SPDK_RUN_UBSAN=1 00:02:09.159 ++ SPDK_TEST_NVMF_MDNS=1 00:02:09.159 ++ NET_TYPE=virt 00:02:09.159 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:09.159 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:09.159 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:09.159 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.159 ++ RUN_NIGHTLY=1 00:02:09.159 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.159 + [[ -n '' ]] 00:02:09.159 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:09.159 + for M in /var/spdk/build-*-manifest.txt 00:02:09.159 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.159 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.159 + for M in /var/spdk/build-*-manifest.txt 00:02:09.159 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.159 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.159 ++ uname 00:02:09.159 + [[ Linux == \L\i\n\u\x ]] 00:02:09.159 + sudo dmesg -T 00:02:09.159 + sudo dmesg --clear 00:02:09.159 + dmesg_pid=5900 00:02:09.159 + sudo dmesg -Tw 00:02:09.159 + [[ Fedora Linux == FreeBSD ]] 00:02:09.159 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.159 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.159 + [[ 
-e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:09.159 + [[ -x /usr/src/fio-static/fio ]] 00:02:09.159 + export FIO_BIN=/usr/src/fio-static/fio 00:02:09.159 + FIO_BIN=/usr/src/fio-static/fio 00:02:09.159 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:09.159 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:09.159 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:09.159 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.159 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.159 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:09.159 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.159 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.159 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.159 Test configuration: 00:02:09.159 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.159 SPDK_TEST_NVMF=1 00:02:09.159 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.159 SPDK_TEST_USDT=1 00:02:09.159 SPDK_RUN_UBSAN=1 00:02:09.159 SPDK_TEST_NVMF_MDNS=1 00:02:09.159 NET_TYPE=virt 00:02:09.159 SPDK_JSONRPC_GO_CLIENT=1 00:02:09.159 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:09.159 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:09.159 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.417 RUN_NIGHTLY=1 20:01:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:09.417 20:01:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.418 20:01:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.418 20:01:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.418 20:01:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.418 20:01:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.418 20:01:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.418 20:01:58 -- paths/export.sh@5 -- $ export PATH 00:02:09.418 20:01:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.418 20:01:58 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 
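Note: the autorun-spdk.conf dumped above is a plain KEY=value shell fragment, which is why the pipeline can simply source it and branch on the flags. A minimal sketch of that pattern, assuming a hypothetical local conf path and using flag names taken from the dump above (not SPDK's actual autorun logic):

  #!/bin/bash
  # Sketch only: source an autorun-style conf and branch on its flags.
  conf=./autorun-spdk.conf              # hypothetical local copy of the file shown above
  [[ -e $conf ]] && source "$conf"      # each line is KEY=value, so sourcing just sets shell variables
  if [[ ${SPDK_TEST_NVMF:-0} -eq 1 && ${SPDK_TEST_NVMF_TRANSPORT:-} == tcp ]]; then
      echo "NVMf-over-TCP tests requested"
  fi
  if [[ -n ${SPDK_RUN_EXTERNAL_DPDK:-} ]]; then
      echo "building against external DPDK at ${SPDK_RUN_EXTERNAL_DPDK}"
  fi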
00:02:09.418 20:01:58 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:09.418 20:01:58 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720987318.XXXXXX 00:02:09.418 20:01:58 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720987318.99Ou43 00:02:09.418 20:01:58 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:09.418 20:01:58 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:02:09.418 20:01:58 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:09.418 20:01:58 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:09.418 20:01:58 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:09.418 20:01:58 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.418 20:01:58 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:09.418 20:01:58 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:09.418 20:01:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.418 20:01:58 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:09.418 20:01:58 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:09.418 20:01:58 -- pm/common@17 -- $ local monitor 00:02:09.418 20:01:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.418 20:01:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.418 20:01:58 -- pm/common@25 -- $ sleep 1 00:02:09.418 20:01:58 -- pm/common@21 -- $ date +%s 00:02:09.418 20:01:58 -- pm/common@21 -- $ date +%s 00:02:09.418 20:01:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720987318 00:02:09.418 20:01:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720987318 00:02:09.418 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720987318_collect-vmstat.pm.log 00:02:09.418 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720987318_collect-cpu-load.pm.log 00:02:10.350 20:01:59 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:10.350 20:01:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.350 20:01:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.350 20:01:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.350 20:01:59 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.350 Sun Jul 14 08:01:59 PM UTC 2024 00:02:10.350 20:01:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.350 v24.05-13-g5fa2f5086 00:02:10.350 20:01:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:10.350 20:01:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.350 20:01:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.350 20:01:59 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:10.350 20:01:59 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:10.350 20:01:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.350 ************************************ 00:02:10.350 START TEST ubsan 00:02:10.350 ************************************ 00:02:10.350 using ubsan 00:02:10.350 20:01:59 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:10.350 00:02:10.350 real 0m0.001s 00:02:10.350 user 0m0.000s 00:02:10.350 sys 0m0.000s 00:02:10.350 20:01:59 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:10.350 ************************************ 00:02:10.350 END TEST ubsan 00:02:10.350 ************************************ 00:02:10.350 20:01:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.350 20:01:59 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:10.350 20:01:59 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:10.350 20:01:59 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:10.350 20:01:59 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:10.350 20:01:59 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:10.350 20:01:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.350 ************************************ 00:02:10.350 START TEST build_native_dpdk 00:02:10.350 ************************************ 00:02:10.350 20:01:59 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:10.350 20:01:59 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:10.350 20:01:59 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:10.350 20:01:59 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:10.350 20:01:59 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:10.350 20:01:59 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:10.351 eeb0605f11 version: 23.11.0 00:02:10.351 238778122a doc: update release notes for 23.11 00:02:10.351 46aa6b3cfc doc: fix description of RSS features 00:02:10.351 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:10.351 7e421ae345 devtools: support skipping forbid rule check 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:10.351 20:01:59 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:10.351 20:01:59 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:10.351 20:01:59 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:10.351 20:01:59 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:10.351 20:01:59 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:10.351 20:01:59 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:10.351 20:01:59 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:10.351 20:01:59 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:10.609 20:01:59 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:10.609 20:01:59 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:10.609 20:01:59 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:10.609 patching file config/rte_config.h 00:02:10.609 Hunk #1 succeeded at 60 (offset 1 line). 00:02:10.609 20:01:59 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:10.609 20:01:59 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:10.609 20:01:59 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:10.609 20:01:59 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:10.609 20:01:59 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.873 The Meson build system 00:02:15.873 Version: 1.3.1 00:02:15.873 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:15.873 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:15.873 Build type: native build 00:02:15.873 Program cat found: YES (/usr/bin/cat) 00:02:15.873 Project name: DPDK 00:02:15.873 Project version: 23.11.0 00:02:15.873 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:15.873 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:15.873 Host machine cpu family: x86_64 00:02:15.873 Host machine cpu: x86_64 00:02:15.873 Message: ## Building in Developer Mode ## 00:02:15.873 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.873 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:15.873 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.873 Program python3 found: YES (/usr/bin/python3) 00:02:15.873 Program cat found: YES (/usr/bin/cat) 00:02:15.873 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
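Note: the xtrace above (scripts/common.sh lt / cmp_versions) compares DPDK 23.11.0 against 21.11.0 one dotted component at a time before deciding whether the rte_config patch applies. A standalone sketch of that component-wise comparison, using a hypothetical helper name rather than SPDK's actual function:

  #!/bin/bash
  # Sketch: succeed (return 0) iff $1 is strictly older than $2, comparing dotted versions numerically.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # versions are equal
  }
  version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"   # matches the trace's 'return 1'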
00:02:15.873 Compiler for C supports arguments -march=native: YES 00:02:15.873 Checking for size of "void *" : 8 00:02:15.873 Checking for size of "void *" : 8 (cached) 00:02:15.873 Library m found: YES 00:02:15.873 Library numa found: YES 00:02:15.873 Has header "numaif.h" : YES 00:02:15.873 Library fdt found: NO 00:02:15.873 Library execinfo found: NO 00:02:15.873 Has header "execinfo.h" : YES 00:02:15.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:15.873 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.873 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.873 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.873 Run-time dependency openssl found: YES 3.0.9 00:02:15.873 Run-time dependency libpcap found: YES 1.10.4 00:02:15.873 Has header "pcap.h" with dependency libpcap: YES 00:02:15.873 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.873 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.873 Compiler for C supports arguments -Wformat: YES 00:02:15.873 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.873 Compiler for C supports arguments -Wformat-security: NO 00:02:15.873 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.873 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.873 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.873 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.873 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.873 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.873 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.873 Compiler for C supports arguments -Wundef: YES 00:02:15.873 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.873 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.873 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.873 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.873 Program objdump found: YES (/usr/bin/objdump) 00:02:15.873 Compiler for C supports arguments -mavx512f: YES 00:02:15.873 Checking if "AVX512 checking" compiles: YES 00:02:15.873 Fetching value of define "__SSE4_2__" : 1 00:02:15.873 Fetching value of define "__AES__" : 1 00:02:15.873 Fetching value of define "__AVX__" : 1 00:02:15.873 Fetching value of define "__AVX2__" : 1 00:02:15.873 Fetching value of define "__AVX512BW__" : (undefined) 00:02:15.873 Fetching value of define "__AVX512CD__" : (undefined) 00:02:15.873 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:15.873 Fetching value of define "__AVX512F__" : (undefined) 00:02:15.873 Fetching value of define "__AVX512VL__" : (undefined) 00:02:15.873 Fetching value of define "__PCLMUL__" : 1 00:02:15.873 Fetching value of define "__RDRND__" : 1 00:02:15.873 Fetching value of define "__RDSEED__" : 1 00:02:15.873 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.873 Fetching value of define "__znver1__" : (undefined) 00:02:15.873 Fetching value of define "__znver2__" : (undefined) 00:02:15.873 Fetching value of define "__znver3__" : (undefined) 00:02:15.873 Fetching value of define "__znver4__" : (undefined) 00:02:15.873 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.873 Message: lib/log: Defining dependency "log" 00:02:15.873 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.873 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.873 Checking for function "getentropy" : NO 00:02:15.873 Message: lib/eal: Defining dependency "eal" 00:02:15.873 Message: lib/ring: Defining dependency "ring" 00:02:15.873 Message: lib/rcu: Defining dependency "rcu" 00:02:15.873 Message: lib/mempool: Defining dependency "mempool" 00:02:15.873 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.873 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.873 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.873 Compiler for C supports arguments -mpclmul: YES 00:02:15.873 Compiler for C supports arguments -maes: YES 00:02:15.873 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.873 Compiler for C supports arguments -mavx512bw: YES 00:02:15.873 Compiler for C supports arguments -mavx512dq: YES 00:02:15.873 Compiler for C supports arguments -mavx512vl: YES 00:02:15.873 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.873 Compiler for C supports arguments -mavx2: YES 00:02:15.873 Compiler for C supports arguments -mavx: YES 00:02:15.873 Message: lib/net: Defining dependency "net" 00:02:15.873 Message: lib/meter: Defining dependency "meter" 00:02:15.873 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.873 Message: lib/pci: Defining dependency "pci" 00:02:15.873 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.873 Message: lib/metrics: Defining dependency "metrics" 00:02:15.873 Message: lib/hash: Defining dependency "hash" 00:02:15.873 Message: lib/timer: Defining dependency "timer" 00:02:15.873 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.873 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:15.873 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:15.873 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:15.873 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:15.873 Message: lib/acl: Defining dependency "acl" 00:02:15.873 Message: lib/bbdev: Defining dependency "bbdev" 00:02:15.873 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:15.873 Run-time dependency libelf found: YES 0.190 00:02:15.873 Message: lib/bpf: Defining dependency "bpf" 00:02:15.873 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:15.873 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.873 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.873 Message: lib/distributor: Defining dependency "distributor" 00:02:15.873 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.873 Message: lib/efd: Defining dependency "efd" 00:02:15.873 Message: lib/eventdev: Defining dependency "eventdev" 00:02:15.873 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:15.873 Message: lib/gpudev: Defining dependency "gpudev" 00:02:15.873 Message: lib/gro: Defining dependency "gro" 00:02:15.873 Message: lib/gso: Defining dependency "gso" 00:02:15.873 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:15.873 Message: lib/jobstats: Defining dependency "jobstats" 00:02:15.873 Message: lib/latencystats: Defining dependency "latencystats" 00:02:15.873 Message: lib/lpm: Defining dependency "lpm" 00:02:15.873 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.873 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:15.873 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:15.873 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:15.873 Message: lib/member: Defining dependency "member" 00:02:15.873 Message: lib/pcapng: Defining dependency "pcapng" 00:02:15.873 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.873 Message: lib/power: Defining dependency "power" 00:02:15.873 Message: lib/rawdev: Defining dependency "rawdev" 00:02:15.873 Message: lib/regexdev: Defining dependency "regexdev" 00:02:15.873 Message: lib/mldev: Defining dependency "mldev" 00:02:15.873 Message: lib/rib: Defining dependency "rib" 00:02:15.873 Message: lib/reorder: Defining dependency "reorder" 00:02:15.873 Message: lib/sched: Defining dependency "sched" 00:02:15.873 Message: lib/security: Defining dependency "security" 00:02:15.873 Message: lib/stack: Defining dependency "stack" 00:02:15.873 Has header "linux/userfaultfd.h" : YES 00:02:15.873 Has header "linux/vduse.h" : YES 00:02:15.873 Message: lib/vhost: Defining dependency "vhost" 00:02:15.873 Message: lib/ipsec: Defining dependency "ipsec" 00:02:15.873 Message: lib/pdcp: Defining dependency "pdcp" 00:02:15.873 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.873 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:15.873 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:15.873 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:15.873 Message: lib/fib: Defining dependency "fib" 00:02:15.873 Message: lib/port: Defining dependency "port" 00:02:15.873 Message: lib/pdump: Defining dependency "pdump" 00:02:15.873 Message: lib/table: Defining dependency "table" 00:02:15.873 Message: lib/pipeline: Defining dependency "pipeline" 00:02:15.873 Message: lib/graph: Defining dependency "graph" 00:02:15.873 Message: lib/node: Defining dependency "node" 00:02:15.873 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:17.249 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.249 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.249 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.249 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:17.249 Compiler for C supports arguments -Wno-unused-value: YES 00:02:17.249 Compiler for C supports arguments -Wno-format: YES 00:02:17.249 Compiler for C supports arguments -Wno-format-security: YES 00:02:17.249 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:17.249 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:17.249 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:17.249 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:17.249 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:17.249 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.249 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:17.249 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:17.249 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:17.249 Has header "sys/epoll.h" : YES 00:02:17.249 Program doxygen found: YES (/usr/bin/doxygen) 00:02:17.249 Configuring doxy-api-html.conf using configuration 00:02:17.249 Configuring doxy-api-man.conf using configuration 00:02:17.249 Program mandb found: YES (/usr/bin/mandb) 00:02:17.249 Program sphinx-build found: NO 00:02:17.249 Configuring rte_build_config.h using configuration 00:02:17.249 Message: 00:02:17.249 ================= 00:02:17.249 Applications Enabled 00:02:17.249 ================= 00:02:17.249 
00:02:17.249 apps: 00:02:17.249 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:17.249 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:17.249 test-pmd, test-regex, test-sad, test-security-perf, 00:02:17.249 00:02:17.249 Message: 00:02:17.249 ================= 00:02:17.249 Libraries Enabled 00:02:17.249 ================= 00:02:17.249 00:02:17.249 libs: 00:02:17.249 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.249 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:17.249 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:17.249 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:17.249 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:17.249 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:17.249 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:17.249 00:02:17.249 00:02:17.249 Message: 00:02:17.249 =============== 00:02:17.249 Drivers Enabled 00:02:17.249 =============== 00:02:17.249 00:02:17.249 common: 00:02:17.249 00:02:17.249 bus: 00:02:17.249 pci, vdev, 00:02:17.249 mempool: 00:02:17.249 ring, 00:02:17.249 dma: 00:02:17.249 00:02:17.249 net: 00:02:17.249 i40e, 00:02:17.249 raw: 00:02:17.249 00:02:17.249 crypto: 00:02:17.249 00:02:17.249 compress: 00:02:17.249 00:02:17.249 regex: 00:02:17.249 00:02:17.249 ml: 00:02:17.249 00:02:17.249 vdpa: 00:02:17.249 00:02:17.249 event: 00:02:17.249 00:02:17.249 baseband: 00:02:17.249 00:02:17.249 gpu: 00:02:17.249 00:02:17.249 00:02:17.249 Message: 00:02:17.249 ================= 00:02:17.249 Content Skipped 00:02:17.249 ================= 00:02:17.249 00:02:17.249 apps: 00:02:17.249 00:02:17.249 libs: 00:02:17.249 00:02:17.249 drivers: 00:02:17.249 common/cpt: not in enabled drivers build config 00:02:17.249 common/dpaax: not in enabled drivers build config 00:02:17.249 common/iavf: not in enabled drivers build config 00:02:17.249 common/idpf: not in enabled drivers build config 00:02:17.249 common/mvep: not in enabled drivers build config 00:02:17.249 common/octeontx: not in enabled drivers build config 00:02:17.249 bus/auxiliary: not in enabled drivers build config 00:02:17.249 bus/cdx: not in enabled drivers build config 00:02:17.249 bus/dpaa: not in enabled drivers build config 00:02:17.249 bus/fslmc: not in enabled drivers build config 00:02:17.249 bus/ifpga: not in enabled drivers build config 00:02:17.249 bus/platform: not in enabled drivers build config 00:02:17.249 bus/vmbus: not in enabled drivers build config 00:02:17.249 common/cnxk: not in enabled drivers build config 00:02:17.249 common/mlx5: not in enabled drivers build config 00:02:17.249 common/nfp: not in enabled drivers build config 00:02:17.249 common/qat: not in enabled drivers build config 00:02:17.249 common/sfc_efx: not in enabled drivers build config 00:02:17.249 mempool/bucket: not in enabled drivers build config 00:02:17.249 mempool/cnxk: not in enabled drivers build config 00:02:17.249 mempool/dpaa: not in enabled drivers build config 00:02:17.249 mempool/dpaa2: not in enabled drivers build config 00:02:17.249 mempool/octeontx: not in enabled drivers build config 00:02:17.249 mempool/stack: not in enabled drivers build config 00:02:17.249 dma/cnxk: not in enabled drivers build config 00:02:17.249 dma/dpaa: not in enabled drivers build config 00:02:17.249 dma/dpaa2: not in enabled drivers build config 00:02:17.249 dma/hisilicon: 
not in enabled drivers build config 00:02:17.249 dma/idxd: not in enabled drivers build config 00:02:17.249 dma/ioat: not in enabled drivers build config 00:02:17.249 dma/skeleton: not in enabled drivers build config 00:02:17.249 net/af_packet: not in enabled drivers build config 00:02:17.249 net/af_xdp: not in enabled drivers build config 00:02:17.249 net/ark: not in enabled drivers build config 00:02:17.249 net/atlantic: not in enabled drivers build config 00:02:17.249 net/avp: not in enabled drivers build config 00:02:17.249 net/axgbe: not in enabled drivers build config 00:02:17.249 net/bnx2x: not in enabled drivers build config 00:02:17.249 net/bnxt: not in enabled drivers build config 00:02:17.249 net/bonding: not in enabled drivers build config 00:02:17.249 net/cnxk: not in enabled drivers build config 00:02:17.249 net/cpfl: not in enabled drivers build config 00:02:17.249 net/cxgbe: not in enabled drivers build config 00:02:17.249 net/dpaa: not in enabled drivers build config 00:02:17.249 net/dpaa2: not in enabled drivers build config 00:02:17.249 net/e1000: not in enabled drivers build config 00:02:17.249 net/ena: not in enabled drivers build config 00:02:17.249 net/enetc: not in enabled drivers build config 00:02:17.249 net/enetfec: not in enabled drivers build config 00:02:17.249 net/enic: not in enabled drivers build config 00:02:17.250 net/failsafe: not in enabled drivers build config 00:02:17.250 net/fm10k: not in enabled drivers build config 00:02:17.250 net/gve: not in enabled drivers build config 00:02:17.250 net/hinic: not in enabled drivers build config 00:02:17.250 net/hns3: not in enabled drivers build config 00:02:17.250 net/iavf: not in enabled drivers build config 00:02:17.250 net/ice: not in enabled drivers build config 00:02:17.250 net/idpf: not in enabled drivers build config 00:02:17.250 net/igc: not in enabled drivers build config 00:02:17.250 net/ionic: not in enabled drivers build config 00:02:17.250 net/ipn3ke: not in enabled drivers build config 00:02:17.250 net/ixgbe: not in enabled drivers build config 00:02:17.250 net/mana: not in enabled drivers build config 00:02:17.250 net/memif: not in enabled drivers build config 00:02:17.250 net/mlx4: not in enabled drivers build config 00:02:17.250 net/mlx5: not in enabled drivers build config 00:02:17.250 net/mvneta: not in enabled drivers build config 00:02:17.250 net/mvpp2: not in enabled drivers build config 00:02:17.250 net/netvsc: not in enabled drivers build config 00:02:17.250 net/nfb: not in enabled drivers build config 00:02:17.250 net/nfp: not in enabled drivers build config 00:02:17.250 net/ngbe: not in enabled drivers build config 00:02:17.250 net/null: not in enabled drivers build config 00:02:17.250 net/octeontx: not in enabled drivers build config 00:02:17.250 net/octeon_ep: not in enabled drivers build config 00:02:17.250 net/pcap: not in enabled drivers build config 00:02:17.250 net/pfe: not in enabled drivers build config 00:02:17.250 net/qede: not in enabled drivers build config 00:02:17.250 net/ring: not in enabled drivers build config 00:02:17.250 net/sfc: not in enabled drivers build config 00:02:17.250 net/softnic: not in enabled drivers build config 00:02:17.250 net/tap: not in enabled drivers build config 00:02:17.250 net/thunderx: not in enabled drivers build config 00:02:17.250 net/txgbe: not in enabled drivers build config 00:02:17.250 net/vdev_netvsc: not in enabled drivers build config 00:02:17.250 net/vhost: not in enabled drivers build config 00:02:17.250 net/virtio: not in enabled 
drivers build config 00:02:17.250 net/vmxnet3: not in enabled drivers build config 00:02:17.250 raw/cnxk_bphy: not in enabled drivers build config 00:02:17.250 raw/cnxk_gpio: not in enabled drivers build config 00:02:17.250 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:17.250 raw/ifpga: not in enabled drivers build config 00:02:17.250 raw/ntb: not in enabled drivers build config 00:02:17.250 raw/skeleton: not in enabled drivers build config 00:02:17.250 crypto/armv8: not in enabled drivers build config 00:02:17.250 crypto/bcmfs: not in enabled drivers build config 00:02:17.250 crypto/caam_jr: not in enabled drivers build config 00:02:17.250 crypto/ccp: not in enabled drivers build config 00:02:17.250 crypto/cnxk: not in enabled drivers build config 00:02:17.250 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.250 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.250 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.250 crypto/mlx5: not in enabled drivers build config 00:02:17.250 crypto/mvsam: not in enabled drivers build config 00:02:17.250 crypto/nitrox: not in enabled drivers build config 00:02:17.250 crypto/null: not in enabled drivers build config 00:02:17.250 crypto/octeontx: not in enabled drivers build config 00:02:17.250 crypto/openssl: not in enabled drivers build config 00:02:17.250 crypto/scheduler: not in enabled drivers build config 00:02:17.250 crypto/uadk: not in enabled drivers build config 00:02:17.250 crypto/virtio: not in enabled drivers build config 00:02:17.250 compress/isal: not in enabled drivers build config 00:02:17.250 compress/mlx5: not in enabled drivers build config 00:02:17.250 compress/octeontx: not in enabled drivers build config 00:02:17.250 compress/zlib: not in enabled drivers build config 00:02:17.250 regex/mlx5: not in enabled drivers build config 00:02:17.250 regex/cn9k: not in enabled drivers build config 00:02:17.250 ml/cnxk: not in enabled drivers build config 00:02:17.250 vdpa/ifc: not in enabled drivers build config 00:02:17.250 vdpa/mlx5: not in enabled drivers build config 00:02:17.250 vdpa/nfp: not in enabled drivers build config 00:02:17.250 vdpa/sfc: not in enabled drivers build config 00:02:17.250 event/cnxk: not in enabled drivers build config 00:02:17.250 event/dlb2: not in enabled drivers build config 00:02:17.250 event/dpaa: not in enabled drivers build config 00:02:17.250 event/dpaa2: not in enabled drivers build config 00:02:17.250 event/dsw: not in enabled drivers build config 00:02:17.250 event/opdl: not in enabled drivers build config 00:02:17.250 event/skeleton: not in enabled drivers build config 00:02:17.250 event/sw: not in enabled drivers build config 00:02:17.250 event/octeontx: not in enabled drivers build config 00:02:17.250 baseband/acc: not in enabled drivers build config 00:02:17.250 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:17.250 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:17.250 baseband/la12xx: not in enabled drivers build config 00:02:17.250 baseband/null: not in enabled drivers build config 00:02:17.250 baseband/turbo_sw: not in enabled drivers build config 00:02:17.250 gpu/cuda: not in enabled drivers build config 00:02:17.250 00:02:17.250 00:02:17.250 Build targets in project: 220 00:02:17.250 00:02:17.250 DPDK 23.11.0 00:02:17.250 00:02:17.250 User defined options 00:02:17.250 libdir : lib 00:02:17.250 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:17.250 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:02:17.250 c_link_args : 00:02:17.250 enable_docs : false 00:02:17.250 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:17.250 enable_kmods : false 00:02:17.250 machine : native 00:02:17.250 tests : false 00:02:17.250 00:02:17.250 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.250 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:17.509 20:02:06 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:17.509 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:17.509 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.509 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.509 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.509 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.509 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.509 [6/710] Linking static target lib/librte_kvargs.a 00:02:17.767 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.767 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.767 [9/710] Linking static target lib/librte_log.a 00:02:17.767 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.767 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.025 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.025 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.025 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:18.025 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.025 [16/710] Linking target lib/librte_log.so.24.0 00:02:18.283 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.283 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.541 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.541 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:18.541 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:18.541 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.541 [23/710] Linking target lib/librte_kvargs.so.24.0 00:02:18.541 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.800 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:18.800 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.800 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.800 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.800 [29/710] Linking static target lib/librte_telemetry.a 00:02:18.800 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.800 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.058 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.058 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.316 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.316 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.316 [36/710] Linking target lib/librte_telemetry.so.24.0 00:02:19.316 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.316 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.316 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.316 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.316 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.316 [42/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:19.316 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.316 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.574 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.833 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.833 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:19.833 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.092 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:20.092 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.092 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:20.092 [52/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:20.092 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:20.092 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:20.351 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:20.351 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:20.351 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.351 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:20.351 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.610 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:20.610 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:20.610 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:20.610 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:20.610 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:20.610 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.868 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.868 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.868 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.127 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:21.127 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:21.127 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.127 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:02:21.127 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.387 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.387 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:21.387 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:21.387 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.387 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.387 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.646 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.646 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.646 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.903 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.903 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.903 [85/710] Linking static target lib/librte_ring.a 00:02:21.903 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.903 [87/710] Linking static target lib/librte_eal.a 00:02:21.903 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.161 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:22.162 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.420 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:22.420 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.420 [93/710] Linking static target lib/librte_mempool.a 00:02:22.420 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.420 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.679 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.679 [97/710] Linking static target lib/librte_rcu.a 00:02:22.679 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.679 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:22.679 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.937 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.937 [102/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.937 [103/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.937 [104/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.937 [105/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.196 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.196 [107/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.196 [108/710] Linking static target lib/librte_net.a 00:02:23.454 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:23.454 [110/710] Linking static target lib/librte_mbuf.a 00:02:23.454 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.454 [112/710] Linking static target lib/librte_meter.a 00:02:23.454 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.454 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.713 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.713 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.713 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.713 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:23.971 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.229 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.487 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.745 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.745 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.745 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.745 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.745 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.745 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.745 [128/710] Linking static target lib/librte_pci.a 00:02:25.002 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:25.002 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.002 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.002 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:25.002 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.260 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:25.260 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:25.260 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:25.260 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:25.260 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:25.260 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:25.260 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:25.518 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:25.518 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.518 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:25.518 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:25.518 [145/710] Linking static target lib/librte_cmdline.a 00:02:25.775 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.775 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:25.775 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:25.775 [149/710] Linking static target lib/librte_metrics.a 00:02:26.034 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:26.291 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.548 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.548 [153/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:26.548 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:26.548 [155/710] Linking static target lib/librte_timer.a 00:02:26.808 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.065 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:27.065 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:27.322 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:27.322 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:27.889 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.889 [162/710] Linking static target lib/librte_ethdev.a 00:02:27.889 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:27.889 [164/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.889 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:27.889 [166/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:27.889 [167/710] Linking static target lib/librte_bitratestats.a 00:02:28.146 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:28.146 [169/710] Linking target lib/librte_eal.so.24.0 00:02:28.146 [170/710] Linking static target lib/librte_bbdev.a 00:02:28.146 [171/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:28.146 [172/710] Linking static target lib/librte_hash.a 00:02:28.146 [173/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:28.146 [174/710] Linking target lib/librte_ring.so.24.0 00:02:28.146 [175/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.146 [176/710] Linking target lib/librte_meter.so.24.0 00:02:28.405 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:28.405 [178/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:28.405 [179/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:28.405 [180/710] Linking target lib/librte_rcu.so.24.0 00:02:28.405 [181/710] Linking target lib/librte_mempool.so.24.0 00:02:28.405 [182/710] Linking target lib/librte_pci.so.24.0 00:02:28.405 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:28.663 [184/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:28.663 [185/710] Linking target lib/librte_timer.so.24.0 00:02:28.663 [186/710] Linking static target lib/acl/libavx2_tmp.a 00:02:28.663 [187/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:28.663 [188/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:28.663 [189/710] Linking target lib/librte_mbuf.so.24.0 00:02:28.663 [190/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:28.663 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.663 [192/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:28.663 [193/710] Linking static target lib/acl/libavx512_tmp.a 00:02:28.663 [194/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.663 [195/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:28.663 [196/710] Linking target lib/librte_net.so.24.0 00:02:28.921 [197/710] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:28.921 [198/710] Linking target lib/librte_bbdev.so.24.0 00:02:28.921 [199/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:28.921 [200/710] Linking target lib/librte_cmdline.so.24.0 00:02:28.921 [201/710] Linking target lib/librte_hash.so.24.0 00:02:28.921 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:28.921 [203/710] Linking static target lib/librte_acl.a 00:02:29.180 [204/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:29.180 [205/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:29.180 [206/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:29.454 [207/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.454 [208/710] Linking target lib/librte_acl.so.24.0 00:02:29.454 [209/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:29.454 [210/710] Linking static target lib/librte_cfgfile.a 00:02:29.454 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:29.454 [212/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:29.723 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:29.723 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:29.723 [215/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.723 [216/710] Linking target lib/librte_cfgfile.so.24.0 00:02:29.981 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:29.981 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.981 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:29.981 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:29.981 [221/710] Linking static target lib/librte_bpf.a 00:02:30.240 [222/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.240 [223/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.240 [224/710] Linking static target lib/librte_compressdev.a 00:02:30.240 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.499 [226/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:30.499 [227/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.499 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:30.757 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.757 [230/710] Linking target lib/librte_compressdev.so.24.0 00:02:30.757 [231/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:30.757 [232/710] Linking static target lib/librte_distributor.a 00:02:30.757 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:31.014 [234/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:31.014 [235/710] Linking static target lib/librte_dmadev.a 00:02:31.014 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:31.015 [237/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.015 [238/710] Linking target 
lib/librte_distributor.so.24.0 00:02:31.273 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.273 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:31.273 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:31.531 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:31.789 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:31.789 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:31.789 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:31.789 [246/710] Linking static target lib/librte_efd.a 00:02:32.047 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:32.047 [248/710] Linking static target lib/librte_cryptodev.a 00:02:32.047 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:32.047 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.047 [251/710] Linking target lib/librte_efd.so.24.0 00:02:32.613 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:32.613 [253/710] Linking static target lib/librte_dispatcher.a 00:02:32.613 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:32.871 [255/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:32.871 [256/710] Linking static target lib/librte_gpudev.a 00:02:32.871 [257/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.871 [258/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:32.871 [259/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:32.871 [260/710] Linking target lib/librte_ethdev.so.24.0 00:02:32.871 [261/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:33.130 [262/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.130 [263/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:33.130 [264/710] Linking target lib/librte_metrics.so.24.0 00:02:33.388 [265/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:33.388 [266/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:33.388 [267/710] Linking target lib/librte_bitratestats.so.24.0 00:02:33.388 [268/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.388 [269/710] Linking target lib/librte_bpf.so.24.0 00:02:33.388 [270/710] Linking target lib/librte_cryptodev.so.24.0 00:02:33.388 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:33.388 [272/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:33.388 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:33.647 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.647 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:33.647 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:33.905 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:33.905 [278/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:33.905 [279/710] Linking static target lib/librte_eventdev.a 00:02:33.905 [280/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:33.905 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:33.905 [282/710] Linking static target lib/librte_gro.a 00:02:33.905 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:33.905 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:34.163 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:34.163 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:34.163 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.163 [288/710] Linking target lib/librte_gro.so.24.0 00:02:34.421 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:34.421 [290/710] Linking static target lib/librte_gso.a 00:02:34.421 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.421 [292/710] Linking target lib/librte_gso.so.24.0 00:02:34.679 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:34.679 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:34.679 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:34.679 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:34.936 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:34.936 [298/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:34.936 [299/710] Linking static target lib/librte_jobstats.a 00:02:34.936 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:34.936 [301/710] Linking static target lib/librte_ip_frag.a 00:02:34.936 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:34.936 [303/710] Linking static target lib/librte_latencystats.a 00:02:35.194 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.194 [305/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.194 [306/710] Linking target lib/librte_jobstats.so.24.0 00:02:35.194 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.194 [308/710] Linking target lib/librte_latencystats.so.24.0 00:02:35.194 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:35.452 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:35.452 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:35.452 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:35.452 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:35.452 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:35.452 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:35.452 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:35.710 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.710 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.968 [319/710] Linking 
target lib/librte_eventdev.so.24.0 00:02:35.968 [320/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:35.968 [321/710] Linking static target lib/librte_lpm.a 00:02:35.968 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:35.968 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:35.968 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:36.226 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:36.226 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:36.226 [327/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.226 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:36.226 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:36.226 [330/710] Linking static target lib/librte_pcapng.a 00:02:36.226 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.226 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:36.483 [333/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:36.483 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:36.483 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.483 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:36.741 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.741 [338/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:36.741 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:36.999 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.999 [341/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.999 [342/710] Linking static target lib/librte_power.a 00:02:36.999 [343/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:36.999 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:36.999 [345/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:36.999 [346/710] Linking static target lib/librte_regexdev.a 00:02:36.999 [347/710] Linking static target lib/librte_member.a 00:02:37.257 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:37.257 [349/710] Linking static target lib/librte_rawdev.a 00:02:37.257 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:37.257 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:37.257 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:37.515 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.515 [354/710] Linking target lib/librte_member.so.24.0 00:02:37.515 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:37.515 [356/710] Linking static target lib/librte_mldev.a 00:02:37.515 [357/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:37.515 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.772 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:37.772 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:37.772 [361/710] Linking target lib/librte_power.so.24.0 00:02:37.772 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:37.773 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.773 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:38.030 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:38.030 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.030 [367/710] Linking static target lib/librte_reorder.a 00:02:38.030 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.288 [369/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:38.288 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:38.288 [371/710] Linking static target lib/librte_rib.a 00:02:38.288 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:38.288 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:38.546 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.546 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:38.546 [376/710] Linking static target lib/librte_stack.a 00:02:38.546 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:38.546 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:38.546 [379/710] Linking static target lib/librte_security.a 00:02:38.546 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:38.546 [381/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.804 [382/710] Linking target lib/librte_stack.so.24.0 00:02:38.804 [383/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.804 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.804 [385/710] Linking target lib/librte_rib.so.24.0 00:02:38.804 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:38.804 [387/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:38.804 [388/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:39.063 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.063 [390/710] Linking target lib/librte_security.so.24.0 00:02:39.063 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.063 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:39.063 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.321 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:39.321 [395/710] Linking static target lib/librte_sched.a 00:02:39.579 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.579 [397/710] Linking target lib/librte_sched.so.24.0 00:02:39.836 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:39.836 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:39.836 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:39.836 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:39.836 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:40.401 [403/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:40.401 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.401 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:40.659 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:40.659 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:40.917 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:40.917 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:40.917 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:40.917 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:40.917 [412/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:40.917 [413/710] Linking static target lib/librte_ipsec.a 00:02:41.175 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:41.175 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:41.433 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.433 [417/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:41.433 [418/710] Linking target lib/librte_ipsec.so.24.0 00:02:41.433 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:41.433 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:41.433 [421/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:41.692 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:41.692 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:42.257 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:42.516 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:42.516 [426/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:42.516 [427/710] Linking static target lib/librte_fib.a 00:02:42.516 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:42.516 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:42.516 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:42.516 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:42.774 [432/710] Linking static target lib/librte_pdcp.a 00:02:42.774 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.774 [434/710] Linking target lib/librte_fib.so.24.0 00:02:43.031 [435/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.031 [436/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:43.031 [437/710] Linking target lib/librte_pdcp.so.24.0 00:02:43.597 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:43.597 [439/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:43.597 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:43.597 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:43.597 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:43.855 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:43.855 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:44.111 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:44.111 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:44.111 [447/710] Linking static target lib/librte_port.a 00:02:44.367 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:44.367 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:44.367 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:44.624 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:44.624 [452/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:44.624 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:44.624 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.624 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:44.624 [456/710] Linking static target lib/librte_pdump.a 00:02:44.881 [457/710] Linking target lib/librte_port.so.24.0 00:02:44.881 [458/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:44.881 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:44.881 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.881 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:45.139 [462/710] Linking target lib/librte_pdump.so.24.0 00:02:45.704 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:45.704 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:45.704 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:45.704 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:45.704 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:45.704 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:45.962 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:45.962 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:45.962 [471/710] Linking static target lib/librte_table.a 00:02:46.219 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:46.219 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:46.785 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:46.785 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.785 [476/710] Linking target lib/librte_table.so.24.0 00:02:46.785 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:47.045 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:47.045 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:47.045 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:47.304 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:47.561 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:47.562 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:47.562 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:47.562 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:47.820 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:48.078 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:48.078 [488/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:48.335 [489/710] Linking static target lib/librte_graph.a 00:02:48.335 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:48.335 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:48.335 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:48.594 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:48.852 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.852 [495/710] Linking target lib/librte_graph.so.24.0 00:02:48.852 [496/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:49.111 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:49.111 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:49.111 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:49.369 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:49.628 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:49.628 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:49.628 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:49.628 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:49.628 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.886 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:49.886 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:50.144 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:50.404 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:50.404 [510/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.404 [511/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:50.404 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:50.404 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:50.404 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.662 [515/710] Linking static target lib/librte_node.a 00:02:50.932 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.932 [517/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.932 [518/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.932 [519/710] Linking target lib/librte_node.so.24.0 00:02:50.932 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:50.932 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.190 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.190 [523/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.190 [524/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.190 [525/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.190 [526/710] Linking static target drivers/librte_bus_pci.a 00:02:51.190 [527/710] Compiling C object 
drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.190 [528/710] Linking static target drivers/librte_bus_vdev.a 00:02:51.448 [529/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.448 [530/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:51.448 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.448 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:51.448 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:51.448 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:51.706 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:51.706 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.706 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:51.706 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.706 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.964 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:51.964 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.964 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.964 [543/710] Linking static target drivers/librte_mempool_ring.a 00:02:51.964 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.964 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:52.221 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:52.478 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:52.736 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:52.736 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:52.736 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:52.736 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:53.670 [552/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:53.670 [553/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:53.670 [554/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:53.670 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:53.928 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:53.928 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:54.186 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:54.442 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:54.699 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:54.699 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:54.699 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:55.264 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:55.264 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:55.264 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:55.566 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:55.832 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:55.832 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:56.090 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:56.090 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:56.090 [571/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:56.090 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:56.090 [573/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:56.090 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.090 [575/710] Linking static target lib/librte_vhost.a 00:02:56.655 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:56.655 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:56.655 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:56.655 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:56.655 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:56.913 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:56.913 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:57.171 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:57.171 [584/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:57.171 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:57.171 [586/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:57.171 [587/710] Linking static target drivers/librte_net_i40e.a 00:02:57.429 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:57.429 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:57.429 [590/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.429 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:57.429 [592/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:57.429 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:57.429 [594/710] Linking target lib/librte_vhost.so.24.0 00:02:57.996 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.996 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:57.996 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:57.996 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:57.996 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:58.562 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:58.562 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:58.562 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:58.819 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:58.819 [604/710] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:58.819 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:58.819 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:59.076 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:59.334 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:59.592 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:59.592 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:59.592 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:59.592 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:59.592 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:59.850 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:59.850 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:59.850 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:59.850 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:00.414 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:00.414 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:00.414 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:00.671 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:00.671 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:00.671 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:01.605 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:01.605 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:01.605 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:01.605 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:01.862 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:01.862 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:01.862 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:02.119 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:02.119 [632/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:02.376 [633/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:02.376 [634/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:02.376 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:02.376 [636/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:02.633 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:02.891 [638/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:02.891 [639/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:02.891 [640/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:03.148 [641/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:03.148 [642/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:03.148 [643/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:03.405 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:03.405 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:03.662 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:03.662 [647/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:03.922 [648/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:03.922 [649/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:03.922 [650/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:04.181 [651/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:04.181 [652/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:04.181 [653/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:04.181 [654/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.181 [655/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:04.181 [656/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:04.438 [657/710] Linking static target lib/librte_pipeline.a 00:03:04.438 [658/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:04.696 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:04.696 [660/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:04.954 [661/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:04.954 [662/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:05.212 [663/710] Linking target app/dpdk-dumpcap 00:03:05.212 [664/710] Linking target app/dpdk-graph 00:03:05.212 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:05.212 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:05.212 [667/710] Linking target app/dpdk-pdump 00:03:05.212 [668/710] Linking target app/dpdk-proc-info 00:03:05.470 [669/710] Linking target app/dpdk-test-acl 00:03:05.728 [670/710] Linking target app/dpdk-test-cmdline 00:03:05.728 [671/710] Linking target app/dpdk-test-bbdev 00:03:05.728 [672/710] Linking target app/dpdk-test-compress-perf 00:03:05.728 [673/710] Linking target app/dpdk-test-crypto-perf 00:03:05.728 [674/710] Linking target app/dpdk-test-dma-perf 00:03:05.728 [675/710] Linking target app/dpdk-test-eventdev 00:03:05.986 [676/710] Linking target app/dpdk-test-fib 00:03:06.245 [677/710] Linking target app/dpdk-test-flow-perf 00:03:06.245 [678/710] Linking target app/dpdk-test-mldev 00:03:06.245 [679/710] Linking target app/dpdk-test-gpudev 00:03:06.245 [680/710] Linking target app/dpdk-test-pipeline 00:03:06.504 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:06.763 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:07.022 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:07.022 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:07.022 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:07.022 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:07.022 [687/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.022 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:07.282 [689/710] Linking target lib/librte_pipeline.so.24.0 00:03:07.541 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:07.541 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:07.541 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:07.801 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.060 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:08.319 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:08.319 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:08.319 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:08.578 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:08.836 [699/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:08.836 [700/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:08.836 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.095 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:09.095 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.095 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:09.095 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:09.354 [706/710] Linking target app/dpdk-test-regex 00:03:09.612 [707/710] Linking target app/dpdk-test-sad 00:03:09.612 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:09.871 [709/710] Linking target app/dpdk-testpmd 00:03:10.130 [710/710] Linking target app/dpdk-test-security-perf 00:03:10.130 20:02:59 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.130 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.130 [0/1] Installing files. 
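For context, the "$ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install" step logged above is the install stage of a Meson/Ninja build of DPDK v23.11 driven by the autobuild script. A minimal sketch of the equivalent manual flow follows; only the ninja install invocation is taken verbatim from the log, and the meson setup line (including the --prefix value, inferred from the /home/vagrant/spdk_repo/dpdk/build/share/dpdk install destinations listed next) is an illustrative assumption, not necessarily what the CI script ran.

  # Sketch, under the assumptions noted above: configure, build, and install DPDK
  # the way this log does.
  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp --prefix="$PWD/build"   # assumed configure step; prefix inferred from the install paths
  ninja -C build-tmp -j10                       # compile/link stage ([1/710] .. [710/710] above)
  ninja -C build-tmp -j10 install               # install stage shown verbatim; copies examples to build/share/dpdk/examples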
00:03:10.390 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.390 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.391 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.391 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.392 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:10.653 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.654 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:10.654 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:10.654 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
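(Editor's note) At this point the install step is copying both the static archives (librte_*.a) and the versioned shared objects (librte_*.so.24.0) into /home/vagrant/spdk_repo/dpdk/build/lib, and the matching pkg-config file (libdpdk.pc) is placed under build/lib/pkgconfig further down. A minimal sketch of how a consumer could build against this private install via pkg-config follows; the hello_dpdk.c source name is a hypothetical placeholder and nothing below is run by this pipeline:
# illustrative only -- not part of this job
export DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig
pkg-config --modversion libdpdk                                  # prints the version recorded in libdpdk.pc
cc hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)   # hello_dpdk.c is hypothetical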
00:03:10.654 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.654 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
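(Editor's note) Because everything lands under the non-standard prefix /home/vagrant/spdk_repo/dpdk/build (libraries in build/lib, PMD plugins in build/lib/dpdk/pmds-24.0, tools in build/bin, as the entries below show), running the installed dpdk-* binaries would normally require pointing the runtime linker at that directory first. A rough, assumed verification sketch, not something this job executes:
# illustrative only -- not part of this job
export DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
export LD_LIBRARY_PATH=$DPDK_BUILD/lib
ldd $DPDK_BUILD/bin/dpdk-testpmd | grep librte_eal   # expected to resolve librte_eal.so.24 from build/lib
ls $DPDK_BUILD/lib/dpdk/pmds-24.0                    # bus/mempool/net PMDs installed as loadable plugins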
00:03:10.655 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.655 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.916 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.916 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.916 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.916 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.916 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.916 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.916 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.916 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.916 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.916 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:10.916 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.916 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.917 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.918 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.919 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.919 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:10.919 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:10.919 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:10.919 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:10.919 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:10.919 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:10.919 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:10.919 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:10.919 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:10.919 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:10.919 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:10.919 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:10.919 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:10.919 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:10.919 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:10.919 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:10.919 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:10.919 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.179 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:11.179 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.179 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:11.179 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.179 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:11.179 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.179 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:11.179 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.179 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:11.179 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.179 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:11.179 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.179 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:11.179 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.179 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:11.179 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.179 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:11.179 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.179 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:11.179 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.179 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:11.179 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.179 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:11.179 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.179 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:11.179 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.179 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:11.179 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.179 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:11.179 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.179 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:11.180 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.180 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:11.180 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.180 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:11.180 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.180 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:11.180 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:11.180 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:11.180 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.180 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:11.180 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.180 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:11.180 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.180 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:11.180 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.180 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:11.180 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.180 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:11.180 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.180 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:11.180 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.180 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:11.180 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.180 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:11.180 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.180 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:11.180 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.180 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:11.180 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.180 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:11.180 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.180 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:11.180 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:11.180 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:11.180 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.180 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:11.180 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.180 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:11.180 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.180 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:11.180 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.180 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:11.180 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:11.180 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:11.180 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:11.180 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:11.180 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:11.180 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:11.180 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:11.180 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:11.180 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:11.180 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:11.180 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:11.180 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:11.180 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.180 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:11.180 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.180 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:11.180 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:11.180 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:11.180 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:11.180 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:11.180 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.180 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:11.180 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.180 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:11.180 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.180 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:11.180 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.180 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:11.180 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.180 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:11.180 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.180 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:11.180 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.180 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:11.180 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:11.180 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:11.180 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:11.180 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:11.180 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:11.180 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:11.180 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:11.180 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:11.180 20:03:00 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:11.180 20:03:00 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:11.180 20:03:00 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:11.180 20:03:00 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.180 00:03:11.180 real 1m0.672s 00:03:11.180 user 7m21.831s 00:03:11.180 sys 1m10.296s 00:03:11.180 20:03:00 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:11.180 20:03:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:11.180 ************************************ 00:03:11.180 END TEST build_native_dpdk 00:03:11.180 ************************************ 00:03:11.180 20:03:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.180 20:03:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.180 20:03:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.180 20:03:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.180 20:03:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.180 20:03:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.180 20:03:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.180 20:03:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:11.180 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:11.439 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.439 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:11.439 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:11.697 Using 'verbs' RDMA provider 00:03:27.992 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:40.192 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:40.192 go version go1.21.1 linux/amd64 00:03:40.192 Creating mk/config.mk...done. 00:03:40.192 Creating mk/cc.flags.mk...done. 00:03:40.192 Type 'make' to build. 00:03:40.192 20:03:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:40.192 20:03:27 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:40.192 20:03:27 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:40.192 20:03:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.192 ************************************ 00:03:40.192 START TEST make 00:03:40.192 ************************************ 00:03:40.192 20:03:27 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:40.192 make[1]: Nothing to be done for 'all'. 
00:04:06.721 CC lib/ut/ut.o 00:04:06.721 CC lib/ut_mock/mock.o 00:04:06.721 CC lib/log/log_flags.o 00:04:06.721 CC lib/log/log.o 00:04:06.721 CC lib/log/log_deprecated.o 00:04:06.721 LIB libspdk_ut_mock.a 00:04:06.721 LIB libspdk_log.a 00:04:06.721 LIB libspdk_ut.a 00:04:06.721 SO libspdk_ut.so.2.0 00:04:06.721 SO libspdk_ut_mock.so.6.0 00:04:06.721 SO libspdk_log.so.7.0 00:04:06.721 SYMLINK libspdk_ut_mock.so 00:04:06.721 SYMLINK libspdk_ut.so 00:04:06.721 SYMLINK libspdk_log.so 00:04:06.721 CC lib/dma/dma.o 00:04:06.721 CC lib/ioat/ioat.o 00:04:06.721 CC lib/util/base64.o 00:04:06.721 CC lib/util/bit_array.o 00:04:06.722 CC lib/util/cpuset.o 00:04:06.722 CC lib/util/crc16.o 00:04:06.722 CC lib/util/crc32.o 00:04:06.722 CC lib/util/crc32c.o 00:04:06.722 CXX lib/trace_parser/trace.o 00:04:06.722 CC lib/vfio_user/host/vfio_user_pci.o 00:04:06.722 CC lib/util/crc32_ieee.o 00:04:06.722 CC lib/util/crc64.o 00:04:06.722 CC lib/util/dif.o 00:04:06.722 LIB libspdk_dma.a 00:04:06.722 CC lib/util/fd.o 00:04:06.722 CC lib/vfio_user/host/vfio_user.o 00:04:06.722 SO libspdk_dma.so.4.0 00:04:06.722 CC lib/util/file.o 00:04:06.722 LIB libspdk_ioat.a 00:04:06.722 CC lib/util/hexlify.o 00:04:06.722 CC lib/util/iov.o 00:04:06.722 SYMLINK libspdk_dma.so 00:04:06.722 CC lib/util/math.o 00:04:06.722 SO libspdk_ioat.so.7.0 00:04:06.722 CC lib/util/pipe.o 00:04:06.722 CC lib/util/strerror_tls.o 00:04:06.722 CC lib/util/string.o 00:04:06.722 SYMLINK libspdk_ioat.so 00:04:06.722 CC lib/util/uuid.o 00:04:06.722 CC lib/util/fd_group.o 00:04:06.722 LIB libspdk_vfio_user.a 00:04:06.722 CC lib/util/xor.o 00:04:06.722 SO libspdk_vfio_user.so.5.0 00:04:06.722 CC lib/util/zipf.o 00:04:06.722 SYMLINK libspdk_vfio_user.so 00:04:06.722 LIB libspdk_util.a 00:04:06.722 SO libspdk_util.so.9.0 00:04:06.722 SYMLINK libspdk_util.so 00:04:06.722 LIB libspdk_trace_parser.a 00:04:06.722 SO libspdk_trace_parser.so.5.0 00:04:06.722 SYMLINK libspdk_trace_parser.so 00:04:06.722 CC lib/idxd/idxd.o 00:04:06.722 CC lib/idxd/idxd_user.o 00:04:06.722 CC lib/idxd/idxd_kernel.o 00:04:06.722 CC lib/rdma/common.o 00:04:06.722 CC lib/conf/conf.o 00:04:06.722 CC lib/rdma/rdma_verbs.o 00:04:06.722 CC lib/vmd/vmd.o 00:04:06.722 CC lib/env_dpdk/env.o 00:04:06.722 CC lib/vmd/led.o 00:04:06.722 CC lib/json/json_parse.o 00:04:06.722 CC lib/env_dpdk/memory.o 00:04:06.722 CC lib/env_dpdk/pci.o 00:04:06.722 CC lib/env_dpdk/init.o 00:04:06.722 CC lib/env_dpdk/threads.o 00:04:06.722 LIB libspdk_conf.a 00:04:06.722 CC lib/json/json_util.o 00:04:06.722 SO libspdk_conf.so.6.0 00:04:06.722 LIB libspdk_rdma.a 00:04:06.722 SO libspdk_rdma.so.6.0 00:04:06.722 SYMLINK libspdk_conf.so 00:04:06.722 CC lib/json/json_write.o 00:04:06.722 CC lib/env_dpdk/pci_ioat.o 00:04:06.722 SYMLINK libspdk_rdma.so 00:04:06.722 CC lib/env_dpdk/pci_virtio.o 00:04:06.722 CC lib/env_dpdk/pci_vmd.o 00:04:06.722 CC lib/env_dpdk/pci_idxd.o 00:04:06.722 LIB libspdk_idxd.a 00:04:06.722 CC lib/env_dpdk/pci_event.o 00:04:06.722 CC lib/env_dpdk/sigbus_handler.o 00:04:06.722 SO libspdk_idxd.so.12.0 00:04:06.722 CC lib/env_dpdk/pci_dpdk.o 00:04:06.722 LIB libspdk_vmd.a 00:04:06.722 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:06.722 SYMLINK libspdk_idxd.so 00:04:06.722 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:06.722 LIB libspdk_json.a 00:04:06.722 SO libspdk_vmd.so.6.0 00:04:06.722 SO libspdk_json.so.6.0 00:04:06.722 SYMLINK libspdk_vmd.so 00:04:06.722 SYMLINK libspdk_json.so 00:04:06.722 CC lib/jsonrpc/jsonrpc_server.o 00:04:06.722 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:06.722 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:04:06.722 CC lib/jsonrpc/jsonrpc_client.o 00:04:06.722 LIB libspdk_jsonrpc.a 00:04:06.722 SO libspdk_jsonrpc.so.6.0 00:04:06.722 SYMLINK libspdk_jsonrpc.so 00:04:06.722 LIB libspdk_env_dpdk.a 00:04:06.722 SO libspdk_env_dpdk.so.14.0 00:04:06.722 CC lib/rpc/rpc.o 00:04:06.722 SYMLINK libspdk_env_dpdk.so 00:04:06.979 LIB libspdk_rpc.a 00:04:06.979 SO libspdk_rpc.so.6.0 00:04:06.979 SYMLINK libspdk_rpc.so 00:04:07.237 CC lib/notify/notify.o 00:04:07.237 CC lib/notify/notify_rpc.o 00:04:07.237 CC lib/trace/trace.o 00:04:07.237 CC lib/trace/trace_flags.o 00:04:07.237 CC lib/keyring/keyring.o 00:04:07.237 CC lib/trace/trace_rpc.o 00:04:07.237 CC lib/keyring/keyring_rpc.o 00:04:07.494 LIB libspdk_notify.a 00:04:07.494 SO libspdk_notify.so.6.0 00:04:07.494 LIB libspdk_keyring.a 00:04:07.494 SO libspdk_keyring.so.1.0 00:04:07.494 SYMLINK libspdk_notify.so 00:04:07.494 LIB libspdk_trace.a 00:04:07.494 SO libspdk_trace.so.10.0 00:04:07.494 SYMLINK libspdk_keyring.so 00:04:07.752 SYMLINK libspdk_trace.so 00:04:08.010 CC lib/thread/thread.o 00:04:08.010 CC lib/thread/iobuf.o 00:04:08.010 CC lib/sock/sock.o 00:04:08.010 CC lib/sock/sock_rpc.o 00:04:08.269 LIB libspdk_sock.a 00:04:08.269 SO libspdk_sock.so.9.0 00:04:08.526 SYMLINK libspdk_sock.so 00:04:08.805 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:08.805 CC lib/nvme/nvme_ctrlr.o 00:04:08.805 CC lib/nvme/nvme_fabric.o 00:04:08.805 CC lib/nvme/nvme_ns_cmd.o 00:04:08.805 CC lib/nvme/nvme_ns.o 00:04:08.805 CC lib/nvme/nvme_pcie_common.o 00:04:08.805 CC lib/nvme/nvme_pcie.o 00:04:08.805 CC lib/nvme/nvme.o 00:04:08.805 CC lib/nvme/nvme_qpair.o 00:04:09.370 LIB libspdk_thread.a 00:04:09.370 SO libspdk_thread.so.10.0 00:04:09.627 CC lib/nvme/nvme_quirks.o 00:04:09.627 CC lib/nvme/nvme_transport.o 00:04:09.627 SYMLINK libspdk_thread.so 00:04:09.627 CC lib/nvme/nvme_discovery.o 00:04:09.627 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:09.627 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:09.627 CC lib/accel/accel.o 00:04:09.627 CC lib/nvme/nvme_tcp.o 00:04:09.885 CC lib/nvme/nvme_opal.o 00:04:09.885 CC lib/blob/blobstore.o 00:04:10.142 CC lib/accel/accel_rpc.o 00:04:10.142 CC lib/init/json_config.o 00:04:10.399 CC lib/init/subsystem.o 00:04:10.399 CC lib/init/subsystem_rpc.o 00:04:10.399 CC lib/blob/request.o 00:04:10.399 CC lib/virtio/virtio.o 00:04:10.399 CC lib/virtio/virtio_vhost_user.o 00:04:10.399 CC lib/accel/accel_sw.o 00:04:10.399 CC lib/blob/zeroes.o 00:04:10.399 CC lib/init/rpc.o 00:04:10.657 CC lib/blob/blob_bs_dev.o 00:04:10.657 CC lib/nvme/nvme_io_msg.o 00:04:10.657 LIB libspdk_init.a 00:04:10.657 CC lib/virtio/virtio_vfio_user.o 00:04:10.657 SO libspdk_init.so.5.0 00:04:10.657 CC lib/virtio/virtio_pci.o 00:04:10.657 CC lib/nvme/nvme_poll_group.o 00:04:10.657 LIB libspdk_accel.a 00:04:10.657 SYMLINK libspdk_init.so 00:04:10.657 CC lib/nvme/nvme_zns.o 00:04:10.657 SO libspdk_accel.so.15.0 00:04:10.914 SYMLINK libspdk_accel.so 00:04:10.914 CC lib/nvme/nvme_stubs.o 00:04:10.914 CC lib/nvme/nvme_auth.o 00:04:10.914 CC lib/nvme/nvme_cuse.o 00:04:10.914 CC lib/nvme/nvme_rdma.o 00:04:10.914 LIB libspdk_virtio.a 00:04:10.914 SO libspdk_virtio.so.7.0 00:04:11.172 SYMLINK libspdk_virtio.so 00:04:11.172 CC lib/event/app.o 00:04:11.429 CC lib/event/reactor.o 00:04:11.429 CC lib/event/log_rpc.o 00:04:11.429 CC lib/event/app_rpc.o 00:04:11.429 CC lib/event/scheduler_static.o 00:04:11.429 CC lib/bdev/bdev.o 00:04:11.429 CC lib/bdev/bdev_rpc.o 00:04:11.686 CC lib/bdev/bdev_zone.o 00:04:11.686 CC lib/bdev/part.o 00:04:11.686 CC lib/bdev/scsi_nvme.o 
00:04:11.686 LIB libspdk_event.a 00:04:11.686 SO libspdk_event.so.13.0 00:04:11.943 SYMLINK libspdk_event.so 00:04:12.201 LIB libspdk_nvme.a 00:04:12.460 SO libspdk_nvme.so.13.0 00:04:12.718 SYMLINK libspdk_nvme.so 00:04:12.718 LIB libspdk_blob.a 00:04:12.718 SO libspdk_blob.so.11.0 00:04:12.976 SYMLINK libspdk_blob.so 00:04:13.257 CC lib/lvol/lvol.o 00:04:13.257 CC lib/blobfs/blobfs.o 00:04:13.257 CC lib/blobfs/tree.o 00:04:13.871 LIB libspdk_bdev.a 00:04:13.871 LIB libspdk_blobfs.a 00:04:13.871 SO libspdk_bdev.so.15.0 00:04:14.128 SO libspdk_blobfs.so.10.0 00:04:14.128 LIB libspdk_lvol.a 00:04:14.128 SO libspdk_lvol.so.10.0 00:04:14.128 SYMLINK libspdk_blobfs.so 00:04:14.128 SYMLINK libspdk_bdev.so 00:04:14.128 SYMLINK libspdk_lvol.so 00:04:14.385 CC lib/nvmf/ctrlr.o 00:04:14.385 CC lib/scsi/dev.o 00:04:14.385 CC lib/nvmf/ctrlr_bdev.o 00:04:14.385 CC lib/nvmf/ctrlr_discovery.o 00:04:14.385 CC lib/nbd/nbd_rpc.o 00:04:14.385 CC lib/nbd/nbd.o 00:04:14.385 CC lib/ublk/ublk.o 00:04:14.385 CC lib/nvmf/subsystem.o 00:04:14.385 CC lib/scsi/lun.o 00:04:14.385 CC lib/ftl/ftl_core.o 00:04:14.641 CC lib/ftl/ftl_init.o 00:04:14.641 CC lib/ftl/ftl_layout.o 00:04:14.641 CC lib/scsi/port.o 00:04:14.897 LIB libspdk_nbd.a 00:04:14.897 CC lib/scsi/scsi.o 00:04:14.897 CC lib/scsi/scsi_bdev.o 00:04:14.897 CC lib/nvmf/nvmf.o 00:04:14.897 CC lib/ublk/ublk_rpc.o 00:04:14.897 SO libspdk_nbd.so.7.0 00:04:14.897 SYMLINK libspdk_nbd.so 00:04:14.897 CC lib/nvmf/nvmf_rpc.o 00:04:14.897 CC lib/ftl/ftl_debug.o 00:04:14.897 CC lib/scsi/scsi_pr.o 00:04:14.897 CC lib/scsi/scsi_rpc.o 00:04:14.897 LIB libspdk_ublk.a 00:04:14.897 CC lib/nvmf/transport.o 00:04:15.154 SO libspdk_ublk.so.3.0 00:04:15.154 SYMLINK libspdk_ublk.so 00:04:15.154 CC lib/ftl/ftl_io.o 00:04:15.154 CC lib/nvmf/tcp.o 00:04:15.154 CC lib/ftl/ftl_sb.o 00:04:15.154 CC lib/ftl/ftl_l2p.o 00:04:15.410 CC lib/scsi/task.o 00:04:15.410 CC lib/ftl/ftl_l2p_flat.o 00:04:15.410 CC lib/ftl/ftl_nv_cache.o 00:04:15.410 CC lib/nvmf/stubs.o 00:04:15.410 LIB libspdk_scsi.a 00:04:15.666 CC lib/nvmf/mdns_server.o 00:04:15.666 CC lib/nvmf/rdma.o 00:04:15.666 SO libspdk_scsi.so.9.0 00:04:15.666 CC lib/nvmf/auth.o 00:04:15.666 CC lib/ftl/ftl_band.o 00:04:15.666 SYMLINK libspdk_scsi.so 00:04:15.666 CC lib/ftl/ftl_band_ops.o 00:04:15.666 CC lib/ftl/ftl_writer.o 00:04:15.923 CC lib/ftl/ftl_rq.o 00:04:15.923 CC lib/ftl/ftl_reloc.o 00:04:15.923 CC lib/ftl/ftl_l2p_cache.o 00:04:15.923 CC lib/ftl/ftl_p2l.o 00:04:15.923 CC lib/ftl/mngt/ftl_mngt.o 00:04:15.923 CC lib/iscsi/conn.o 00:04:16.180 CC lib/iscsi/init_grp.o 00:04:16.180 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:16.180 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:16.437 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:16.437 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:16.437 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:16.437 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:16.437 CC lib/iscsi/iscsi.o 00:04:16.437 CC lib/iscsi/md5.o 00:04:16.437 CC lib/iscsi/param.o 00:04:16.693 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:16.693 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:16.693 CC lib/iscsi/portal_grp.o 00:04:16.693 CC lib/iscsi/tgt_node.o 00:04:16.693 CC lib/iscsi/iscsi_subsystem.o 00:04:16.693 CC lib/vhost/vhost.o 00:04:16.693 CC lib/iscsi/iscsi_rpc.o 00:04:16.693 CC lib/vhost/vhost_rpc.o 00:04:16.951 CC lib/iscsi/task.o 00:04:16.951 CC lib/vhost/vhost_scsi.o 00:04:16.951 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:16.951 CC lib/vhost/vhost_blk.o 00:04:16.951 CC lib/vhost/rte_vhost_user.o 00:04:17.210 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:17.210 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:04:17.210 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:17.210 CC lib/ftl/utils/ftl_conf.o 00:04:17.467 CC lib/ftl/utils/ftl_md.o 00:04:17.467 CC lib/ftl/utils/ftl_mempool.o 00:04:17.467 CC lib/ftl/utils/ftl_bitmap.o 00:04:17.467 LIB libspdk_nvmf.a 00:04:17.467 CC lib/ftl/utils/ftl_property.o 00:04:17.467 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:17.467 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:17.467 SO libspdk_nvmf.so.18.0 00:04:17.725 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:17.725 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:17.725 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:17.725 LIB libspdk_iscsi.a 00:04:17.725 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:17.725 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:17.725 SYMLINK libspdk_nvmf.so 00:04:17.725 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:17.725 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:17.725 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:17.983 SO libspdk_iscsi.so.8.0 00:04:17.983 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:17.983 CC lib/ftl/base/ftl_base_dev.o 00:04:17.983 CC lib/ftl/base/ftl_base_bdev.o 00:04:17.983 CC lib/ftl/ftl_trace.o 00:04:17.983 SYMLINK libspdk_iscsi.so 00:04:17.983 LIB libspdk_vhost.a 00:04:18.241 SO libspdk_vhost.so.8.0 00:04:18.241 LIB libspdk_ftl.a 00:04:18.241 SYMLINK libspdk_vhost.so 00:04:18.499 SO libspdk_ftl.so.9.0 00:04:18.757 SYMLINK libspdk_ftl.so 00:04:19.323 CC module/env_dpdk/env_dpdk_rpc.o 00:04:19.323 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:19.323 CC module/accel/dsa/accel_dsa.o 00:04:19.323 CC module/blob/bdev/blob_bdev.o 00:04:19.323 CC module/accel/ioat/accel_ioat.o 00:04:19.323 CC module/accel/iaa/accel_iaa.o 00:04:19.323 CC module/sock/posix/posix.o 00:04:19.323 CC module/accel/error/accel_error.o 00:04:19.323 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:19.323 CC module/keyring/file/keyring.o 00:04:19.323 LIB libspdk_env_dpdk_rpc.a 00:04:19.323 CC module/keyring/file/keyring_rpc.o 00:04:19.582 LIB libspdk_scheduler_dpdk_governor.a 00:04:19.583 SO libspdk_env_dpdk_rpc.so.6.0 00:04:19.583 LIB libspdk_scheduler_dynamic.a 00:04:19.583 CC module/accel/error/accel_error_rpc.o 00:04:19.583 CC module/accel/ioat/accel_ioat_rpc.o 00:04:19.583 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:19.583 SO libspdk_scheduler_dynamic.so.4.0 00:04:19.583 CC module/accel/iaa/accel_iaa_rpc.o 00:04:19.583 CC module/accel/dsa/accel_dsa_rpc.o 00:04:19.583 SYMLINK libspdk_env_dpdk_rpc.so 00:04:19.583 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:19.583 LIB libspdk_keyring_file.a 00:04:19.583 LIB libspdk_blob_bdev.a 00:04:19.583 SYMLINK libspdk_scheduler_dynamic.so 00:04:19.583 SO libspdk_keyring_file.so.1.0 00:04:19.583 SO libspdk_blob_bdev.so.11.0 00:04:19.583 LIB libspdk_accel_error.a 00:04:19.583 LIB libspdk_accel_ioat.a 00:04:19.583 SYMLINK libspdk_keyring_file.so 00:04:19.583 SYMLINK libspdk_blob_bdev.so 00:04:19.583 LIB libspdk_accel_iaa.a 00:04:19.583 SO libspdk_accel_error.so.2.0 00:04:19.583 LIB libspdk_accel_dsa.a 00:04:19.583 SO libspdk_accel_ioat.so.6.0 00:04:19.583 SO libspdk_accel_iaa.so.3.0 00:04:19.583 SO libspdk_accel_dsa.so.5.0 00:04:19.841 SYMLINK libspdk_accel_error.so 00:04:19.841 SYMLINK libspdk_accel_ioat.so 00:04:19.841 CC module/keyring/linux/keyring.o 00:04:19.841 CC module/keyring/linux/keyring_rpc.o 00:04:19.841 CC module/scheduler/gscheduler/gscheduler.o 00:04:19.841 SYMLINK libspdk_accel_iaa.so 00:04:19.841 SYMLINK libspdk_accel_dsa.so 00:04:19.841 LIB libspdk_keyring_linux.a 00:04:19.841 LIB libspdk_scheduler_gscheduler.a 00:04:19.841 CC 
module/bdev/error/vbdev_error.o 00:04:19.841 CC module/bdev/gpt/gpt.o 00:04:19.841 CC module/bdev/delay/vbdev_delay.o 00:04:19.841 SO libspdk_keyring_linux.so.1.0 00:04:19.841 SO libspdk_scheduler_gscheduler.so.4.0 00:04:19.841 CC module/blobfs/bdev/blobfs_bdev.o 00:04:19.841 CC module/bdev/malloc/bdev_malloc.o 00:04:19.841 CC module/bdev/lvol/vbdev_lvol.o 00:04:20.099 LIB libspdk_sock_posix.a 00:04:20.099 SYMLINK libspdk_scheduler_gscheduler.so 00:04:20.099 SYMLINK libspdk_keyring_linux.so 00:04:20.099 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:20.099 CC module/bdev/gpt/vbdev_gpt.o 00:04:20.099 SO libspdk_sock_posix.so.6.0 00:04:20.099 CC module/bdev/null/bdev_null.o 00:04:20.099 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:20.099 SYMLINK libspdk_sock_posix.so 00:04:20.099 CC module/bdev/null/bdev_null_rpc.o 00:04:20.099 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:20.099 CC module/bdev/error/vbdev_error_rpc.o 00:04:20.358 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:20.358 LIB libspdk_bdev_malloc.a 00:04:20.358 LIB libspdk_bdev_gpt.a 00:04:20.358 LIB libspdk_bdev_null.a 00:04:20.358 LIB libspdk_blobfs_bdev.a 00:04:20.358 SO libspdk_bdev_malloc.so.6.0 00:04:20.358 SO libspdk_bdev_gpt.so.6.0 00:04:20.358 SO libspdk_bdev_null.so.6.0 00:04:20.358 SO libspdk_blobfs_bdev.so.6.0 00:04:20.358 LIB libspdk_bdev_error.a 00:04:20.358 SYMLINK libspdk_bdev_gpt.so 00:04:20.358 SYMLINK libspdk_bdev_malloc.so 00:04:20.358 SYMLINK libspdk_bdev_null.so 00:04:20.358 SYMLINK libspdk_blobfs_bdev.so 00:04:20.358 SO libspdk_bdev_error.so.6.0 00:04:20.358 LIB libspdk_bdev_lvol.a 00:04:20.358 LIB libspdk_bdev_delay.a 00:04:20.358 SO libspdk_bdev_lvol.so.6.0 00:04:20.358 SYMLINK libspdk_bdev_error.so 00:04:20.358 CC module/bdev/nvme/bdev_nvme.o 00:04:20.358 SO libspdk_bdev_delay.so.6.0 00:04:20.616 CC module/bdev/passthru/vbdev_passthru.o 00:04:20.616 SYMLINK libspdk_bdev_lvol.so 00:04:20.616 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:20.616 SYMLINK libspdk_bdev_delay.so 00:04:20.616 CC module/bdev/nvme/nvme_rpc.o 00:04:20.616 CC module/bdev/raid/bdev_raid.o 00:04:20.616 CC module/bdev/split/vbdev_split.o 00:04:20.616 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:20.616 CC module/bdev/aio/bdev_aio.o 00:04:20.616 CC module/bdev/ftl/bdev_ftl.o 00:04:20.616 CC module/bdev/iscsi/bdev_iscsi.o 00:04:20.875 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:20.875 CC module/bdev/split/vbdev_split_rpc.o 00:04:20.875 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:20.875 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:20.875 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.875 CC module/bdev/aio/bdev_aio_rpc.o 00:04:20.875 LIB libspdk_bdev_passthru.a 00:04:20.875 SO libspdk_bdev_passthru.so.6.0 00:04:20.875 LIB libspdk_bdev_split.a 00:04:20.875 CC module/bdev/nvme/bdev_mdns_client.o 00:04:21.133 LIB libspdk_bdev_iscsi.a 00:04:21.133 SO libspdk_bdev_split.so.6.0 00:04:21.133 SO libspdk_bdev_iscsi.so.6.0 00:04:21.133 SYMLINK libspdk_bdev_passthru.so 00:04:21.133 CC module/bdev/nvme/vbdev_opal.o 00:04:21.133 LIB libspdk_bdev_zone_block.a 00:04:21.133 SYMLINK libspdk_bdev_split.so 00:04:21.133 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:21.133 LIB libspdk_bdev_aio.a 00:04:21.133 SYMLINK libspdk_bdev_iscsi.so 00:04:21.133 SO libspdk_bdev_zone_block.so.6.0 00:04:21.133 LIB libspdk_bdev_ftl.a 00:04:21.133 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:21.133 SO libspdk_bdev_aio.so.6.0 00:04:21.133 SO libspdk_bdev_ftl.so.6.0 00:04:21.133 SYMLINK libspdk_bdev_zone_block.so 00:04:21.133 SYMLINK libspdk_bdev_aio.so 00:04:21.133 CC 
module/bdev/raid/bdev_raid_rpc.o 00:04:21.133 CC module/bdev/raid/bdev_raid_sb.o 00:04:21.133 SYMLINK libspdk_bdev_ftl.so 00:04:21.133 CC module/bdev/raid/raid0.o 00:04:21.391 CC module/bdev/raid/raid1.o 00:04:21.391 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:21.391 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:21.391 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:21.391 CC module/bdev/raid/concat.o 00:04:21.650 LIB libspdk_bdev_raid.a 00:04:21.650 SO libspdk_bdev_raid.so.6.0 00:04:21.650 SYMLINK libspdk_bdev_raid.so 00:04:21.908 LIB libspdk_bdev_virtio.a 00:04:21.908 SO libspdk_bdev_virtio.so.6.0 00:04:21.908 SYMLINK libspdk_bdev_virtio.so 00:04:22.844 LIB libspdk_bdev_nvme.a 00:04:22.844 SO libspdk_bdev_nvme.so.7.0 00:04:22.844 SYMLINK libspdk_bdev_nvme.so 00:04:23.412 CC module/event/subsystems/sock/sock.o 00:04:23.412 CC module/event/subsystems/scheduler/scheduler.o 00:04:23.412 CC module/event/subsystems/iobuf/iobuf.o 00:04:23.412 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:23.412 CC module/event/subsystems/vmd/vmd.o 00:04:23.412 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:23.412 CC module/event/subsystems/keyring/keyring.o 00:04:23.412 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:23.672 LIB libspdk_event_vhost_blk.a 00:04:23.672 LIB libspdk_event_scheduler.a 00:04:23.672 LIB libspdk_event_keyring.a 00:04:23.672 LIB libspdk_event_sock.a 00:04:23.672 LIB libspdk_event_vmd.a 00:04:23.672 SO libspdk_event_vhost_blk.so.3.0 00:04:23.672 LIB libspdk_event_iobuf.a 00:04:23.672 SO libspdk_event_scheduler.so.4.0 00:04:23.672 SO libspdk_event_keyring.so.1.0 00:04:23.672 SO libspdk_event_sock.so.5.0 00:04:23.672 SO libspdk_event_vmd.so.6.0 00:04:23.672 SO libspdk_event_iobuf.so.3.0 00:04:23.672 SYMLINK libspdk_event_vhost_blk.so 00:04:23.672 SYMLINK libspdk_event_scheduler.so 00:04:23.672 SYMLINK libspdk_event_keyring.so 00:04:23.672 SYMLINK libspdk_event_sock.so 00:04:23.672 SYMLINK libspdk_event_vmd.so 00:04:23.672 SYMLINK libspdk_event_iobuf.so 00:04:23.931 CC module/event/subsystems/accel/accel.o 00:04:24.190 LIB libspdk_event_accel.a 00:04:24.190 SO libspdk_event_accel.so.6.0 00:04:24.190 SYMLINK libspdk_event_accel.so 00:04:24.448 CC module/event/subsystems/bdev/bdev.o 00:04:24.705 LIB libspdk_event_bdev.a 00:04:24.705 SO libspdk_event_bdev.so.6.0 00:04:24.962 SYMLINK libspdk_event_bdev.so 00:04:24.962 CC module/event/subsystems/ublk/ublk.o 00:04:24.962 CC module/event/subsystems/scsi/scsi.o 00:04:24.962 CC module/event/subsystems/nbd/nbd.o 00:04:25.219 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:25.219 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:25.219 LIB libspdk_event_ublk.a 00:04:25.219 SO libspdk_event_ublk.so.3.0 00:04:25.219 LIB libspdk_event_nbd.a 00:04:25.219 LIB libspdk_event_scsi.a 00:04:25.219 SO libspdk_event_nbd.so.6.0 00:04:25.219 SO libspdk_event_scsi.so.6.0 00:04:25.219 SYMLINK libspdk_event_ublk.so 00:04:25.477 SYMLINK libspdk_event_nbd.so 00:04:25.477 SYMLINK libspdk_event_scsi.so 00:04:25.477 LIB libspdk_event_nvmf.a 00:04:25.477 SO libspdk_event_nvmf.so.6.0 00:04:25.477 SYMLINK libspdk_event_nvmf.so 00:04:25.735 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:25.735 CC module/event/subsystems/iscsi/iscsi.o 00:04:25.735 LIB libspdk_event_vhost_scsi.a 00:04:25.735 LIB libspdk_event_iscsi.a 00:04:25.735 SO libspdk_event_vhost_scsi.so.3.0 00:04:25.735 SO libspdk_event_iscsi.so.6.0 00:04:25.993 SYMLINK libspdk_event_iscsi.so 00:04:25.993 SYMLINK libspdk_event_vhost_scsi.so 00:04:25.993 SO libspdk.so.6.0 00:04:25.993 SYMLINK 
libspdk.so 00:04:26.251 CXX app/trace/trace.o 00:04:26.509 CC examples/ioat/perf/perf.o 00:04:26.509 CC examples/nvme/hello_world/hello_world.o 00:04:26.509 CC examples/sock/hello_world/hello_sock.o 00:04:26.509 CC examples/accel/perf/accel_perf.o 00:04:26.509 CC examples/blob/hello_world/hello_blob.o 00:04:26.509 CC examples/bdev/hello_world/hello_bdev.o 00:04:26.509 CC test/app/bdev_svc/bdev_svc.o 00:04:26.509 CC test/bdev/bdevio/bdevio.o 00:04:26.509 CC test/accel/dif/dif.o 00:04:26.509 LINK ioat_perf 00:04:26.768 LINK bdev_svc 00:04:26.768 LINK hello_sock 00:04:26.768 LINK hello_world 00:04:26.768 LINK hello_blob 00:04:26.768 LINK hello_bdev 00:04:26.768 LINK spdk_trace 00:04:26.768 CC examples/ioat/verify/verify.o 00:04:26.768 LINK bdevio 00:04:27.026 LINK accel_perf 00:04:27.026 CC examples/nvme/reconnect/reconnect.o 00:04:27.026 LINK dif 00:04:27.026 CC examples/bdev/bdevperf/bdevperf.o 00:04:27.026 CC test/app/histogram_perf/histogram_perf.o 00:04:27.026 CC examples/blob/cli/blobcli.o 00:04:27.026 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:27.026 LINK verify 00:04:27.026 CC app/trace_record/trace_record.o 00:04:27.285 LINK histogram_perf 00:04:27.285 CC test/app/jsoncat/jsoncat.o 00:04:27.285 CC app/nvmf_tgt/nvmf_main.o 00:04:27.285 LINK reconnect 00:04:27.285 CC app/iscsi_tgt/iscsi_tgt.o 00:04:27.285 LINK jsoncat 00:04:27.285 LINK spdk_trace_record 00:04:27.285 CC app/spdk_lspci/spdk_lspci.o 00:04:27.543 CC app/spdk_tgt/spdk_tgt.o 00:04:27.544 LINK nvmf_tgt 00:04:27.544 LINK nvme_fuzz 00:04:27.544 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:27.544 LINK blobcli 00:04:27.544 LINK iscsi_tgt 00:04:27.544 LINK spdk_lspci 00:04:27.544 CC test/app/stub/stub.o 00:04:27.544 LINK spdk_tgt 00:04:27.802 CC examples/vmd/lsvmd/lsvmd.o 00:04:27.802 LINK bdevperf 00:04:27.802 CC examples/vmd/led/led.o 00:04:27.802 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:27.802 LINK stub 00:04:27.802 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:27.802 LINK lsvmd 00:04:27.802 CC app/spdk_nvme_perf/perf.o 00:04:27.802 LINK led 00:04:28.060 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:28.060 CC examples/nvmf/nvmf/nvmf.o 00:04:28.060 LINK nvme_manage 00:04:28.060 CC examples/nvme/arbitration/arbitration.o 00:04:28.060 TEST_HEADER include/spdk/accel.h 00:04:28.060 TEST_HEADER include/spdk/accel_module.h 00:04:28.060 TEST_HEADER include/spdk/assert.h 00:04:28.060 TEST_HEADER include/spdk/barrier.h 00:04:28.060 CC test/blobfs/mkfs/mkfs.o 00:04:28.060 TEST_HEADER include/spdk/base64.h 00:04:28.060 TEST_HEADER include/spdk/bdev.h 00:04:28.060 TEST_HEADER include/spdk/bdev_module.h 00:04:28.060 TEST_HEADER include/spdk/bdev_zone.h 00:04:28.060 TEST_HEADER include/spdk/bit_array.h 00:04:28.060 TEST_HEADER include/spdk/bit_pool.h 00:04:28.060 TEST_HEADER include/spdk/blob_bdev.h 00:04:28.060 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:28.060 TEST_HEADER include/spdk/blobfs.h 00:04:28.060 TEST_HEADER include/spdk/blob.h 00:04:28.061 TEST_HEADER include/spdk/conf.h 00:04:28.061 TEST_HEADER include/spdk/config.h 00:04:28.061 TEST_HEADER include/spdk/cpuset.h 00:04:28.061 TEST_HEADER include/spdk/crc16.h 00:04:28.061 TEST_HEADER include/spdk/crc32.h 00:04:28.061 TEST_HEADER include/spdk/crc64.h 00:04:28.061 TEST_HEADER include/spdk/dif.h 00:04:28.061 TEST_HEADER include/spdk/dma.h 00:04:28.061 TEST_HEADER include/spdk/endian.h 00:04:28.061 TEST_HEADER include/spdk/env_dpdk.h 00:04:28.061 TEST_HEADER include/spdk/env.h 00:04:28.061 TEST_HEADER include/spdk/event.h 00:04:28.061 TEST_HEADER 
include/spdk/fd_group.h 00:04:28.061 TEST_HEADER include/spdk/fd.h 00:04:28.061 TEST_HEADER include/spdk/file.h 00:04:28.061 TEST_HEADER include/spdk/ftl.h 00:04:28.061 TEST_HEADER include/spdk/gpt_spec.h 00:04:28.061 TEST_HEADER include/spdk/hexlify.h 00:04:28.061 TEST_HEADER include/spdk/histogram_data.h 00:04:28.061 TEST_HEADER include/spdk/idxd.h 00:04:28.061 TEST_HEADER include/spdk/idxd_spec.h 00:04:28.061 TEST_HEADER include/spdk/init.h 00:04:28.061 TEST_HEADER include/spdk/ioat.h 00:04:28.319 TEST_HEADER include/spdk/ioat_spec.h 00:04:28.319 TEST_HEADER include/spdk/iscsi_spec.h 00:04:28.319 TEST_HEADER include/spdk/json.h 00:04:28.319 TEST_HEADER include/spdk/jsonrpc.h 00:04:28.319 TEST_HEADER include/spdk/keyring.h 00:04:28.319 TEST_HEADER include/spdk/keyring_module.h 00:04:28.319 TEST_HEADER include/spdk/likely.h 00:04:28.319 TEST_HEADER include/spdk/log.h 00:04:28.319 TEST_HEADER include/spdk/lvol.h 00:04:28.319 TEST_HEADER include/spdk/memory.h 00:04:28.319 CC test/dma/test_dma/test_dma.o 00:04:28.319 TEST_HEADER include/spdk/mmio.h 00:04:28.319 TEST_HEADER include/spdk/nbd.h 00:04:28.319 TEST_HEADER include/spdk/notify.h 00:04:28.319 TEST_HEADER include/spdk/nvme.h 00:04:28.319 TEST_HEADER include/spdk/nvme_intel.h 00:04:28.319 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:28.319 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:28.319 TEST_HEADER include/spdk/nvme_spec.h 00:04:28.319 TEST_HEADER include/spdk/nvme_zns.h 00:04:28.319 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:28.319 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:28.319 TEST_HEADER include/spdk/nvmf.h 00:04:28.319 TEST_HEADER include/spdk/nvmf_spec.h 00:04:28.319 TEST_HEADER include/spdk/nvmf_transport.h 00:04:28.319 TEST_HEADER include/spdk/opal.h 00:04:28.319 TEST_HEADER include/spdk/opal_spec.h 00:04:28.319 TEST_HEADER include/spdk/pci_ids.h 00:04:28.319 TEST_HEADER include/spdk/pipe.h 00:04:28.319 TEST_HEADER include/spdk/queue.h 00:04:28.319 LINK nvmf 00:04:28.319 TEST_HEADER include/spdk/reduce.h 00:04:28.319 TEST_HEADER include/spdk/rpc.h 00:04:28.319 TEST_HEADER include/spdk/scheduler.h 00:04:28.319 TEST_HEADER include/spdk/scsi.h 00:04:28.319 TEST_HEADER include/spdk/scsi_spec.h 00:04:28.319 TEST_HEADER include/spdk/sock.h 00:04:28.319 LINK mkfs 00:04:28.319 TEST_HEADER include/spdk/stdinc.h 00:04:28.319 TEST_HEADER include/spdk/string.h 00:04:28.319 TEST_HEADER include/spdk/thread.h 00:04:28.319 TEST_HEADER include/spdk/trace.h 00:04:28.319 TEST_HEADER include/spdk/trace_parser.h 00:04:28.319 TEST_HEADER include/spdk/tree.h 00:04:28.319 TEST_HEADER include/spdk/ublk.h 00:04:28.319 TEST_HEADER include/spdk/util.h 00:04:28.319 TEST_HEADER include/spdk/uuid.h 00:04:28.319 TEST_HEADER include/spdk/version.h 00:04:28.319 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:28.319 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:28.319 TEST_HEADER include/spdk/vhost.h 00:04:28.319 TEST_HEADER include/spdk/vmd.h 00:04:28.319 TEST_HEADER include/spdk/xor.h 00:04:28.319 TEST_HEADER include/spdk/zipf.h 00:04:28.319 CXX test/cpp_headers/accel.o 00:04:28.319 LINK arbitration 00:04:28.319 CC test/event/event_perf/event_perf.o 00:04:28.319 LINK vhost_fuzz 00:04:28.319 CC test/env/mem_callbacks/mem_callbacks.o 00:04:28.592 CXX test/cpp_headers/accel_module.o 00:04:28.592 LINK event_perf 00:04:28.592 CC app/spdk_nvme_identify/identify.o 00:04:28.592 CC app/spdk_nvme_discover/discovery_aer.o 00:04:28.592 CC examples/nvme/hotplug/hotplug.o 00:04:28.592 LINK spdk_nvme_perf 00:04:28.592 LINK test_dma 00:04:28.874 CXX 
test/cpp_headers/assert.o 00:04:28.874 CC test/event/reactor/reactor.o 00:04:28.874 LINK spdk_nvme_discover 00:04:28.874 CC test/lvol/esnap/esnap.o 00:04:28.874 CXX test/cpp_headers/barrier.o 00:04:28.874 LINK hotplug 00:04:28.874 LINK reactor 00:04:28.874 CC test/rpc_client/rpc_client_test.o 00:04:28.874 CC test/nvme/aer/aer.o 00:04:29.133 LINK mem_callbacks 00:04:29.133 CXX test/cpp_headers/base64.o 00:04:29.133 LINK rpc_client_test 00:04:29.133 CC test/thread/poller_perf/poller_perf.o 00:04:29.133 CC test/event/reactor_perf/reactor_perf.o 00:04:29.133 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:29.133 CXX test/cpp_headers/bdev.o 00:04:29.133 CC test/env/vtophys/vtophys.o 00:04:29.391 LINK aer 00:04:29.391 CXX test/cpp_headers/bdev_module.o 00:04:29.391 LINK reactor_perf 00:04:29.391 LINK poller_perf 00:04:29.391 LINK cmb_copy 00:04:29.391 LINK iscsi_fuzz 00:04:29.391 LINK vtophys 00:04:29.391 LINK spdk_nvme_identify 00:04:29.391 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:29.391 CC test/nvme/reset/reset.o 00:04:29.649 CXX test/cpp_headers/bdev_zone.o 00:04:29.649 CC test/event/app_repeat/app_repeat.o 00:04:29.649 CC examples/nvme/abort/abort.o 00:04:29.649 CC test/event/scheduler/scheduler.o 00:04:29.649 LINK env_dpdk_post_init 00:04:29.649 CC app/spdk_top/spdk_top.o 00:04:29.649 CC examples/util/zipf/zipf.o 00:04:29.649 CC app/vhost/vhost.o 00:04:29.649 CXX test/cpp_headers/bit_array.o 00:04:29.649 LINK reset 00:04:29.907 LINK app_repeat 00:04:29.907 LINK scheduler 00:04:29.907 LINK zipf 00:04:29.907 CC test/env/memory/memory_ut.o 00:04:29.907 CXX test/cpp_headers/bit_pool.o 00:04:29.907 LINK vhost 00:04:29.908 CXX test/cpp_headers/blob_bdev.o 00:04:29.908 LINK abort 00:04:29.908 CC test/nvme/sgl/sgl.o 00:04:30.165 CXX test/cpp_headers/blobfs_bdev.o 00:04:30.165 CC test/env/pci/pci_ut.o 00:04:30.165 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:30.165 CC app/spdk_dd/spdk_dd.o 00:04:30.165 CC test/nvme/e2edp/nvme_dp.o 00:04:30.423 LINK sgl 00:04:30.423 CC app/fio/nvme/fio_plugin.o 00:04:30.423 CXX test/cpp_headers/blobfs.o 00:04:30.423 LINK pmr_persistence 00:04:30.423 CXX test/cpp_headers/blob.o 00:04:30.423 LINK pci_ut 00:04:30.423 CXX test/cpp_headers/conf.o 00:04:30.681 LINK spdk_top 00:04:30.681 LINK nvme_dp 00:04:30.681 LINK spdk_dd 00:04:30.681 CC examples/thread/thread/thread_ex.o 00:04:30.681 CXX test/cpp_headers/config.o 00:04:30.681 CXX test/cpp_headers/cpuset.o 00:04:30.940 CXX test/cpp_headers/crc16.o 00:04:30.940 CC app/fio/bdev/fio_plugin.o 00:04:30.940 CC test/nvme/overhead/overhead.o 00:04:30.940 CC test/nvme/err_injection/err_injection.o 00:04:30.940 CXX test/cpp_headers/crc32.o 00:04:30.940 LINK spdk_nvme 00:04:30.940 LINK thread 00:04:30.940 LINK memory_ut 00:04:30.940 LINK err_injection 00:04:31.198 CXX test/cpp_headers/crc64.o 00:04:31.198 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:31.198 LINK overhead 00:04:31.198 CXX test/cpp_headers/dif.o 00:04:31.198 CC examples/idxd/perf/perf.o 00:04:31.198 CXX test/cpp_headers/dma.o 00:04:31.198 CXX test/cpp_headers/endian.o 00:04:31.198 CXX test/cpp_headers/env_dpdk.o 00:04:31.198 LINK spdk_bdev 00:04:31.198 LINK interrupt_tgt 00:04:31.456 CC test/nvme/startup/startup.o 00:04:31.456 CC test/nvme/reserve/reserve.o 00:04:31.456 CXX test/cpp_headers/env.o 00:04:31.456 CXX test/cpp_headers/event.o 00:04:31.456 CXX test/cpp_headers/fd_group.o 00:04:31.456 CXX test/cpp_headers/fd.o 00:04:31.456 CXX test/cpp_headers/file.o 00:04:31.456 LINK idxd_perf 00:04:31.456 LINK startup 00:04:31.456 CXX 
test/cpp_headers/ftl.o 00:04:31.456 CXX test/cpp_headers/gpt_spec.o 00:04:31.714 LINK reserve 00:04:31.714 CXX test/cpp_headers/hexlify.o 00:04:31.714 CXX test/cpp_headers/histogram_data.o 00:04:31.714 CXX test/cpp_headers/idxd.o 00:04:31.714 CC test/nvme/simple_copy/simple_copy.o 00:04:31.714 CXX test/cpp_headers/idxd_spec.o 00:04:31.714 CXX test/cpp_headers/init.o 00:04:31.714 CC test/nvme/connect_stress/connect_stress.o 00:04:31.714 CXX test/cpp_headers/ioat.o 00:04:31.714 CXX test/cpp_headers/ioat_spec.o 00:04:31.714 CC test/nvme/boot_partition/boot_partition.o 00:04:31.977 CC test/nvme/compliance/nvme_compliance.o 00:04:31.977 CXX test/cpp_headers/iscsi_spec.o 00:04:31.977 CXX test/cpp_headers/json.o 00:04:31.977 CXX test/cpp_headers/jsonrpc.o 00:04:31.977 LINK connect_stress 00:04:31.977 LINK simple_copy 00:04:31.977 CXX test/cpp_headers/keyring.o 00:04:31.977 LINK boot_partition 00:04:32.236 CXX test/cpp_headers/keyring_module.o 00:04:32.236 CXX test/cpp_headers/likely.o 00:04:32.236 CXX test/cpp_headers/log.o 00:04:32.236 CXX test/cpp_headers/lvol.o 00:04:32.236 CXX test/cpp_headers/memory.o 00:04:32.236 LINK nvme_compliance 00:04:32.236 CXX test/cpp_headers/mmio.o 00:04:32.236 CC test/nvme/fused_ordering/fused_ordering.o 00:04:32.236 CXX test/cpp_headers/nbd.o 00:04:32.495 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:32.495 CXX test/cpp_headers/notify.o 00:04:32.495 CXX test/cpp_headers/nvme.o 00:04:32.495 CC test/nvme/cuse/cuse.o 00:04:32.495 CC test/nvme/fdp/fdp.o 00:04:32.495 CXX test/cpp_headers/nvme_intel.o 00:04:32.495 CXX test/cpp_headers/nvme_ocssd.o 00:04:32.495 LINK fused_ordering 00:04:32.495 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:32.752 LINK doorbell_aers 00:04:32.752 CXX test/cpp_headers/nvme_spec.o 00:04:32.752 CXX test/cpp_headers/nvme_zns.o 00:04:32.752 CXX test/cpp_headers/nvmf_cmd.o 00:04:32.752 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:32.752 CXX test/cpp_headers/nvmf.o 00:04:32.752 CXX test/cpp_headers/nvmf_spec.o 00:04:32.752 LINK fdp 00:04:33.010 CXX test/cpp_headers/nvmf_transport.o 00:04:33.010 CXX test/cpp_headers/opal.o 00:04:33.010 CXX test/cpp_headers/opal_spec.o 00:04:33.010 CXX test/cpp_headers/pci_ids.o 00:04:33.010 CXX test/cpp_headers/pipe.o 00:04:33.010 CXX test/cpp_headers/queue.o 00:04:33.010 CXX test/cpp_headers/reduce.o 00:04:33.010 CXX test/cpp_headers/rpc.o 00:04:33.010 CXX test/cpp_headers/scheduler.o 00:04:33.010 CXX test/cpp_headers/scsi.o 00:04:33.010 CXX test/cpp_headers/scsi_spec.o 00:04:33.268 CXX test/cpp_headers/sock.o 00:04:33.268 CXX test/cpp_headers/stdinc.o 00:04:33.268 CXX test/cpp_headers/string.o 00:04:33.268 CXX test/cpp_headers/thread.o 00:04:33.268 CXX test/cpp_headers/trace.o 00:04:33.268 CXX test/cpp_headers/trace_parser.o 00:04:33.268 CXX test/cpp_headers/tree.o 00:04:33.268 CXX test/cpp_headers/util.o 00:04:33.268 CXX test/cpp_headers/ublk.o 00:04:33.268 CXX test/cpp_headers/uuid.o 00:04:33.268 CXX test/cpp_headers/version.o 00:04:33.526 CXX test/cpp_headers/vfio_user_pci.o 00:04:33.526 CXX test/cpp_headers/vfio_user_spec.o 00:04:33.526 CXX test/cpp_headers/vhost.o 00:04:33.526 CXX test/cpp_headers/vmd.o 00:04:33.526 CXX test/cpp_headers/xor.o 00:04:33.526 CXX test/cpp_headers/zipf.o 00:04:33.784 LINK cuse 00:04:33.784 LINK esnap 00:04:37.967 00:04:37.967 real 0m58.613s 00:04:37.967 user 5m15.463s 00:04:37.967 sys 1m13.105s 00:04:37.967 20:04:26 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:37.967 20:04:26 make -- common/autotest_common.sh@10 -- $ set +x 00:04:37.967 
************************************ 00:04:37.967 END TEST make 00:04:37.967 ************************************ 00:04:37.967 20:04:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:37.967 20:04:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:37.967 20:04:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:37.967 20:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.967 20:04:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:37.967 20:04:26 -- pm/common@44 -- $ pid=5937 00:04:37.967 20:04:26 -- pm/common@50 -- $ kill -TERM 5937 00:04:37.967 20:04:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.967 20:04:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:37.967 20:04:26 -- pm/common@44 -- $ pid=5939 00:04:37.967 20:04:26 -- pm/common@50 -- $ kill -TERM 5939 00:04:37.967 20:04:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.967 20:04:26 -- nvmf/common.sh@7 -- # uname -s 00:04:37.967 20:04:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.967 20:04:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.967 20:04:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.967 20:04:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.967 20:04:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.967 20:04:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.967 20:04:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.967 20:04:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.967 20:04:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.967 20:04:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.967 20:04:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:04:37.967 20:04:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:04:37.967 20:04:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.967 20:04:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.967 20:04:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:37.967 20:04:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.967 20:04:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.967 20:04:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.967 20:04:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.967 20:04:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.968 20:04:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.968 20:04:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.968 20:04:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.968 20:04:26 -- paths/export.sh@5 -- # export PATH 00:04:37.968 20:04:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.968 20:04:26 -- nvmf/common.sh@47 -- # : 0 00:04:37.968 20:04:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:37.968 20:04:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:37.968 20:04:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.968 20:04:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.968 20:04:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.968 20:04:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:37.968 20:04:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:37.968 20:04:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:37.968 20:04:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:37.968 20:04:26 -- spdk/autotest.sh@32 -- # uname -s 00:04:37.968 20:04:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:37.968 20:04:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:37.968 20:04:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.968 20:04:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:37.968 20:04:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.968 20:04:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:37.968 20:04:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:37.968 20:04:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:37.968 20:04:26 -- spdk/autotest.sh@48 -- # udevadm_pid=67037 00:04:37.968 20:04:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:37.968 20:04:26 -- pm/common@17 -- # local monitor 00:04:37.968 20:04:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.968 20:04:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:37.968 20:04:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.968 20:04:26 -- pm/common@21 -- # date +%s 00:04:37.968 20:04:26 -- pm/common@21 -- # date +%s 00:04:37.968 20:04:26 -- pm/common@25 -- # sleep 1 00:04:37.968 20:04:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720987466 00:04:37.968 20:04:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720987466 00:04:37.968 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720987466_collect-vmstat.pm.log 00:04:37.968 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720987466_collect-cpu-load.pm.log 00:04:38.901 20:04:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:38.901 20:04:27 -- spdk/autotest.sh@57 -- # 
timing_enter autotest 00:04:38.901 20:04:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:38.901 20:04:27 -- common/autotest_common.sh@10 -- # set +x 00:04:38.901 20:04:27 -- spdk/autotest.sh@59 -- # create_test_list 00:04:38.901 20:04:27 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:38.901 20:04:27 -- common/autotest_common.sh@10 -- # set +x 00:04:38.901 20:04:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:38.901 20:04:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:38.901 20:04:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:38.901 20:04:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:38.901 20:04:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:38.901 20:04:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:38.901 20:04:27 -- common/autotest_common.sh@1451 -- # uname 00:04:38.901 20:04:27 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:38.901 20:04:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:38.901 20:04:27 -- common/autotest_common.sh@1471 -- # uname 00:04:38.901 20:04:27 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:38.901 20:04:27 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:38.901 20:04:27 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:38.901 20:04:27 -- spdk/autotest.sh@72 -- # hash lcov 00:04:38.901 20:04:27 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:38.901 20:04:27 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:38.901 --rc lcov_branch_coverage=1 00:04:38.901 --rc lcov_function_coverage=1 00:04:38.901 --rc genhtml_branch_coverage=1 00:04:38.901 --rc genhtml_function_coverage=1 00:04:38.901 --rc genhtml_legend=1 00:04:38.901 --rc geninfo_all_blocks=1 00:04:38.901 ' 00:04:38.901 20:04:27 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:38.901 --rc lcov_branch_coverage=1 00:04:38.901 --rc lcov_function_coverage=1 00:04:38.901 --rc genhtml_branch_coverage=1 00:04:38.901 --rc genhtml_function_coverage=1 00:04:38.901 --rc genhtml_legend=1 00:04:38.901 --rc geninfo_all_blocks=1 00:04:38.901 ' 00:04:38.901 20:04:27 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:38.901 --rc lcov_branch_coverage=1 00:04:38.901 --rc lcov_function_coverage=1 00:04:38.901 --rc genhtml_branch_coverage=1 00:04:38.901 --rc genhtml_function_coverage=1 00:04:38.901 --rc genhtml_legend=1 00:04:38.901 --rc geninfo_all_blocks=1 00:04:38.901 --no-external' 00:04:38.902 20:04:27 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:38.902 --rc lcov_branch_coverage=1 00:04:38.902 --rc lcov_function_coverage=1 00:04:38.902 --rc genhtml_branch_coverage=1 00:04:38.902 --rc genhtml_function_coverage=1 00:04:38.902 --rc genhtml_legend=1 00:04:38.902 --rc geninfo_all_blocks=1 00:04:38.902 --no-external' 00:04:38.902 20:04:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:38.902 lcov: LCOV version 1.14 00:04:38.902 20:04:27 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:53.793 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:53.793 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no 
functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:03.819 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:03.819 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:03.820 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:03.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:03.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:06.350 20:04:55 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:06.350 20:04:55 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:06.350 20:04:55 -- common/autotest_common.sh@10 -- # set +x 00:05:06.350 20:04:55 -- spdk/autotest.sh@91 -- # rm -f 00:05:06.350 20:04:55 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.918 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:06.918 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:06.918 20:04:55 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:06.918 20:04:55 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:06.918 20:04:55 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:06.918 20:04:55 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:06.918 20:04:55 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.918 20:04:55 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:06.918 20:04:55 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:06.918 20:04:55 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.918 20:04:55 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.918 20:04:55 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.918 20:04:55 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:06.918 20:04:55 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:06.918 20:04:55 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:06.918 20:04:55 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.918 20:04:55 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.918 20:04:55 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:05:06.918 20:04:55 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:05:06.918 20:04:55 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:06.918 20:04:55 -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.918 20:04:55 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:06.918 20:04:55 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:05:06.918 20:04:55 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:05:06.918 20:04:55 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:06.918 20:04:55 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:06.918 20:04:55 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:06.918 20:04:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.918 20:04:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:06.918 20:04:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:06.918 20:04:55 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:06.918 20:04:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:07.177 No valid GPT data, bailing 00:05:07.177 20:04:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:07.177 20:04:56 -- scripts/common.sh@391 -- # pt= 00:05:07.177 20:04:56 -- scripts/common.sh@392 -- # return 1 00:05:07.177 20:04:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:07.177 1+0 records in 00:05:07.177 1+0 records out 00:05:07.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052182 s, 201 MB/s 00:05:07.177 20:04:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.177 20:04:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.177 20:04:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:07.177 20:04:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:07.177 20:04:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:07.177 No valid GPT data, bailing 00:05:07.177 20:04:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:07.177 20:04:56 -- scripts/common.sh@391 -- # pt= 00:05:07.177 20:04:56 -- scripts/common.sh@392 -- # return 1 00:05:07.177 20:04:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:07.177 1+0 records in 00:05:07.177 1+0 records out 00:05:07.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508584 s, 206 MB/s 00:05:07.177 20:04:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.177 20:04:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.177 20:04:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:07.177 20:04:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:07.177 20:04:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:07.177 No valid GPT data, bailing 00:05:07.177 20:04:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:07.177 20:04:56 -- scripts/common.sh@391 -- # pt= 00:05:07.177 20:04:56 -- scripts/common.sh@392 -- # return 1 00:05:07.177 20:04:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:07.177 1+0 records in 00:05:07.177 1+0 records out 00:05:07.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429606 s, 244 MB/s 00:05:07.177 20:04:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.177 20:04:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:07.177 20:04:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:07.177 20:04:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:07.177 20:04:56 -- 
scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:07.436 No valid GPT data, bailing 00:05:07.436 20:04:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:07.436 20:04:56 -- scripts/common.sh@391 -- # pt= 00:05:07.436 20:04:56 -- scripts/common.sh@392 -- # return 1 00:05:07.436 20:04:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:07.436 1+0 records in 00:05:07.436 1+0 records out 00:05:07.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444283 s, 236 MB/s 00:05:07.436 20:04:56 -- spdk/autotest.sh@118 -- # sync 00:05:07.436 20:04:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:07.436 20:04:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:07.436 20:04:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:09.340 20:04:58 -- spdk/autotest.sh@124 -- # uname -s 00:05:09.340 20:04:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:09.340 20:04:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:09.340 20:04:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.340 20:04:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.340 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:05:09.340 ************************************ 00:05:09.340 START TEST setup.sh 00:05:09.340 ************************************ 00:05:09.340 20:04:58 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:09.340 * Looking for test storage... 00:05:09.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.340 20:04:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:09.340 20:04:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:09.340 20:04:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:09.340 20:04:58 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.340 20:04:58 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.340 20:04:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.340 ************************************ 00:05:09.340 START TEST acl 00:05:09.340 ************************************ 00:05:09.340 20:04:58 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:09.599 * Looking for test storage... 
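The wipe loop traced just above probes each whole NVMe namespace for partition data and zeroes the first MiB of anything that comes back empty before the setup tests start. A minimal stand-alone sketch of that pattern, assuming blkid's PTTYPE probe alone stands in for the spdk-gpt.py check and that the /dev/nvme*n!(*p*) glob selects the same devices as in the trace; this is not the verbatim autotest.sh code:

#!/usr/bin/env bash
# Illustrative sketch of the device-wipe pattern from the trace above
# (run as root); not the literal autotest.sh / scripts/common.sh source.
shopt -s extglob  # needed for the !(*p*) "whole namespace, no partition" glob

for dev in /dev/nvme*n!(*p*); do
    # Leave devices that already carry a recognizable partition table untouched.
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
    if [[ -n "$pt" ]]; then
        continue
    fi
    # No valid partition data found: clear the first MiB, as in the trace.
    dd if=/dev/zero of="$dev" bs=1M count=1
done
sync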
00:05:09.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.599 20:04:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:09.599 20:04:58 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:09.599 20:04:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:09.599 20:04:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:09.599 20:04:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:09.599 20:04:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:09.599 20:04:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:09.599 20:04:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.599 20:04:58 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.166 20:04:59 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:10.166 20:04:59 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:10.166 20:04:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.166 20:04:59 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:10.166 20:04:59 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.166 20:04:59 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.102 20:04:59 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.102 Hugepages 00:05:11.102 node hugesize free / total 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.102 00:05:11.102 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:11.102 20:04:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:11.102 20:05:00 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:11.102 20:05:00 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.102 20:05:00 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.102 20:05:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:11.102 ************************************ 00:05:11.102 START TEST denied 00:05:11.102 ************************************ 00:05:11.102 20:05:00 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:11.102 20:05:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:11.102 20:05:00 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:11.102 20:05:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:11.102 20:05:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.102 20:05:00 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.069 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.069 20:05:01 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.634 00:05:12.634 real 0m1.439s 00:05:12.634 user 0m0.541s 00:05:12.634 sys 0m0.842s 00:05:12.634 20:05:01 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.634 20:05:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:12.634 ************************************ 00:05:12.634 END TEST denied 00:05:12.634 ************************************ 00:05:12.634 20:05:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:12.634 20:05:01 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.634 20:05:01 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.634 20:05:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:12.634 ************************************ 00:05:12.634 START TEST allowed 00:05:12.634 ************************************ 00:05:12.634 20:05:01 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:12.634 20:05:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:12.634 20:05:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:12.634 20:05:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:12.634 20:05:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.634 20:05:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.566 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:13.566 20:05:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.567 20:05:02 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.132 00:05:14.132 real 0m1.578s 00:05:14.132 user 0m0.703s 00:05:14.132 sys 0m0.864s 00:05:14.132 20:05:03 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.132 20:05:03 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:05:14.132 ************************************ 00:05:14.132 END TEST allowed 00:05:14.132 ************************************ 00:05:14.390 00:05:14.390 real 0m4.859s 00:05:14.390 user 0m2.120s 00:05:14.390 sys 0m2.679s 00:05:14.390 20:05:03 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.390 20:05:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:14.390 ************************************ 00:05:14.390 END TEST acl 00:05:14.390 ************************************ 00:05:14.390 20:05:03 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:14.390 20:05:03 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.390 20:05:03 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.390 20:05:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.390 ************************************ 00:05:14.390 START TEST hugepages 00:05:14.390 ************************************ 00:05:14.390 20:05:03 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:14.390 * Looking for test storage... 00:05:14.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.390 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4449076 kB' 'MemAvailable: 7384208 kB' 'Buffers: 2436 kB' 'Cached: 3136260 kB' 'SwapCached: 0 kB' 'Active: 476892 kB' 'Inactive: 2765980 kB' 'Active(anon): 114668 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 105836 kB' 'Mapped: 48700 kB' 'Shmem: 10492 kB' 'KReclaimable: 88180 kB' 'Slab: 168624 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 80444 kB' 'KernelStack: 6572 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 
'Committed_AS: 345348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 
20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 
20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:14.391 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
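The trace above is setup/common.sh's get_meminfo walking /proc/meminfo entry by entry: each line is split on ': ' by read -r var val _, every non-matching key falls through a continue, and once the requested field (here Hugepagesize) matches, the value 2048 is echoed and the function returns 0. hugepages.sh then records default_hugepages=2048 together with the per-size and global nr_hugepages paths, drops any HUGE_EVEN_ALLOC/HUGEMEM/HUGENODE/NRHUGE overrides, counts a single NUMA node, zeroes that node's hugepage pools, and exports CLEAR_HUGE=yes. A minimal sketch of that scan pattern, assuming a made-up helper name (the repository's own helper is the get_meminfo visible in the trace):

    # Sketch only: mirrors the IFS=': ' / read -r var val _ / continue pattern
    # shown above; meminfo_value is a hypothetical name, not an SPDK function.
    meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
        echo "$val"                        # e.g. 2048 for Hugepagesize (value reported in kB)
        return 0
      done </proc/meminfo
      return 1
    }

    default_hugepages=$(meminfo_value Hugepagesize)   # resolves to 2048 here, matching the echo above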
00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:14.392 20:05:03 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:14.392 20:05:03 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.392 20:05:03 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.392 20:05:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.392 ************************************ 00:05:14.392 START TEST default_setup 00:05:14.392 ************************************ 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.392 20:05:03 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.326 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.326 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6566748 kB' 'MemAvailable: 9501736 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493344 kB' 'Inactive: 2765984 kB' 'Active(anon): 131120 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122252 kB' 'Mapped: 48732 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168320 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80432 kB' 'KernelStack: 6560 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.326 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
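At this point run_test has entered default_setup: get_test_nr_hugepages resolves the requested 2097152 to nr_hugepages=1024 on node 0 (a value consistent with 2097152 / 2048, the default page size in kB), scripts/setup.sh rebinds the two emulated NVMe controllers (0000:00:10.0 and 0000:00:11.0) to uio_pci_generic, and verify_nr_hugepages captures the /proc/meminfo snapshot printed just above before walking it field by field. That walk resolves anon=0 (AnonHugePages is only consulted because transparent hugepages are not set to [never]), and the same scan repeats further down for HugePages_Surp and HugePages_Rsvd. A compressed sketch of those checks, using awk in place of the field-by-field read loop seen in the trace:

    # Sketch only: the same checks this trace performs, condensed with awk.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)      # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # kB of THP-backed anonymous memory
    else
      anon=0
    fi
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # surplus pages beyond the configured pool
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # pages reserved but not yet faulted in
    echo "anon=$anon surp=$surp resv=$resv"                     # this run resolves anon=0 and surp=0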
00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 
20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.327 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6566496 kB' 'MemAvailable: 9501484 kB' 'Buffers: 2436 kB' 'Cached: 
3136252 kB' 'SwapCached: 0 kB' 'Active: 493324 kB' 'Inactive: 2765984 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122248 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168324 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80436 kB' 'KernelStack: 6592 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.328 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.329 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6566496 kB' 'MemAvailable: 9501484 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493224 kB' 'Inactive: 2765984 kB' 'Active(anon): 131000 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122144 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168324 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80436 kB' 'KernelStack: 6576 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 
20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.330 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.331 
20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.331 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.590 
20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.590 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:15.590 nr_hugepages=1024 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.591 resv_hugepages=0 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.591 surplus_hugepages=0 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.591 anon_hugepages=0 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6566496 kB' 'MemAvailable: 9501484 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493224 kB' 'Inactive: 2765984 kB' 'Active(anon): 131000 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122144 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168324 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80436 kB' 'KernelStack: 6576 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 
20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.591 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
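The xtrace in this part of the log is produced by the get_meminfo helper in setup/common.sh, which the hugepages default_setup test calls to pull HugePages_Rsvd, HugePages_Total and per-node HugePages_Surp out of /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo). The shell below is a minimal sketch reconstructed from the trace itself, not the verbatim upstream script; the names (get, node, mem_f, mem) follow the trace and the exact details of common.sh may differ.

shopt -s extglob

# Sketch of a get_meminfo-style helper, as suggested by the xtrace above.
get_meminfo() {
	local get=$1 node=${2:-}   # e.g. get_meminfo HugePages_Surp 0
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# When a node is given and a node-local meminfo exists, read that instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node <N> "; strip it so both file
	# formats parse the same way (extglob pattern, as at common.sh@29).
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Key: value [kB]" records until the requested key matches,
	# then echo just the value (which is what the trace shows as "echo 1024",
	# "echo 0", and so on).
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")

	return 1
}

With values read this way, the test asserts that HugePages_Total equals nr_hugepages plus the reserved and surplus counts (here 1024 == 1024 + 0 + 0), which is why the log prints nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 before checking each NUMA node.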
00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6566496 kB' 'MemUsed: 5675476 kB' 'SwapCached: 0 kB' 'Active: 493484 kB' 'Inactive: 2765984 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3138688 kB' 'Mapped: 48604 kB' 'AnonPages: 122404 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87888 kB' 'Slab: 168324 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.592 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 
20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.593 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.594 node0=1024 expecting 1024 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:15.594 00:05:15.594 real 0m0.998s 00:05:15.594 user 0m0.470s 00:05:15.594 sys 0m0.462s 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.594 20:05:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 ************************************ 00:05:15.594 END TEST default_setup 00:05:15.594 ************************************ 00:05:15.594 20:05:04 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:15.594 20:05:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.594 20:05:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.594 20:05:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 ************************************ 00:05:15.594 START TEST per_node_1G_alloc 00:05:15.594 ************************************ 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.594 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.852 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.852 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614536 kB' 'MemAvailable: 10549536 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493564 kB' 'Inactive: 2765996 kB' 'Active(anon): 131340 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 
'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168312 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80424 kB' 'KernelStack: 6560 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
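The nr_hugepages=512 value chosen in the per_node_1G_alloc setup above follows from the requested size: get_test_nr_hugepages is called with 1048576 kB (1 GiB) for node 0, and the meminfo dump reports 'Hugepagesize: 2048 kB'. A minimal sketch of that arithmetic, assuming the division is as simple as the trace suggests (variable names here are illustrative, not the script's own):

```bash
# Assumed arithmetic behind "nr_hugepages=512" in the trace above:
# 1 GiB requested on node 0, expressed in default-size (2048 kB) hugepages.
size_kb=1048576            # size passed to get_test_nr_hugepages (1 GiB)
hugepage_kb=2048           # "Hugepagesize: 2048 kB" from the meminfo dump
echo $(( size_kb / hugepage_kb ))   # prints 512, matching NRHUGE=512 HUGENODE=0
```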
00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.852 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
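The long runs of `-- # continue` entries in this stretch are xtrace output from the get_meminfo helper in setup/common.sh walking /proc/meminfo (or a node's meminfo file) one field at a time until it reaches the requested key, AnonHugePages in this pass. A minimal sketch of that pattern, reconstructed from what the trace shows rather than taken verbatim from setup/common.sh:

```bash
# Simplified reconstruction of the meminfo scan visible in the trace
# (not the verbatim setup/common.sh source): print one field's value,
# read from /proc/meminfo or from a node's meminfo when a node id is given.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node meminfo lines carry a "Node <id> " prefix; strip it first.
        [[ $line == Node\ * ]] && line=${line#Node * }
        IFS=': ' read -r var val _ <<< "$line"
        # Every field that is not the requested one is skipped; this loop is
        # what produces the long runs of "continue" entries in the log.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0
}
```

Called as `get_meminfo AnonHugePages` (system-wide, node empty, as in this pass) or with a node id for the per-node files, which is why the same field-by-field scan repeats several times per verification.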
00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.853 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614992 kB' 'MemAvailable: 10549992 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493236 kB' 'Inactive: 2765996 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122172 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168312 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80424 kB' 'KernelStack: 6576 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.114 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
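Once this HugePages_Surp pass and the HugePages_Rsvd pass that follows both return 0, verify_nr_hugepages folds the per-node results into the nodes_test array and reports each node against the expected count, as the earlier 'node0=1024 expecting 1024' line from default_setup shows; here the expectation is 512 on node 0. A rough, illustrative-only sketch of the shape of that comparison, reusing the get_meminfo sketch above, with the surplus/reserved bookkeeping from the trace simplified away and the helper name below purely hypothetical:

```bash
# Illustrative sketch only; the real setup/hugepages.sh also folds
# HugePages_Surp / HugePages_Rsvd into the per-node count before comparing.
verify_nodes() {
    local expected=$1; shift
    local node count rc=0
    for node in "$@"; do
        count=$(get_meminfo HugePages_Total "$node")
        echo "node${node}=${count} expecting ${expected}"
        [[ $count == "$expected" ]] || rc=1
    done
    return $rc
}

verify_nodes 512 0   # per_node_1G_alloc: expect 512 pages on node 0
```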
00:05:16.115 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614992 kB' 'MemAvailable: 10549992 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493140 kB' 'Inactive: 2765996 kB' 'Active(anon): 130916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122088 kB' 'Mapped: 48704 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168312 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80424 kB' 'KernelStack: 6544 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.116 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.117
20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:16.117 nr_hugepages=512 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:16.117 resv_hugepages=0 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.117 surplus_hugepages=0 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.117 anon_hugepages=0 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.117 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7615512 kB' 'MemAvailable: 10550512 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493240 kB' 'Inactive: 2765996 kB' 'Active(anon): 131016 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122192 kB' 'Mapped: 48904 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168312 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80424 kB' 'KernelStack: 6544 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.118 20:05:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@33 -- # echo 512 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.119 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7615588 kB' 'MemUsed: 4626384 kB' 'SwapCached: 0 kB' 'Active: 493192 kB' 'Inactive: 2765996 kB' 'Active(anon): 130968 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 3138688 kB' 'Mapped: 48704 kB' 'AnonPages: 122132 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87888 kB' 'Slab: 168308 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.120 20:05:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.120 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.121 20:05:05
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.121 node0=512 expecting 512 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:16.121 00:05:16.121 real 0m0.528s 00:05:16.121 user 0m0.273s 00:05:16.121 sys 0m0.288s 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.121 20:05:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:16.121 ************************************ 00:05:16.121 END TEST per_node_1G_alloc 00:05:16.121 ************************************ 00:05:16.121 20:05:05 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:16.121 20:05:05 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.121 20:05:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.121 20:05:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:16.121 ************************************ 00:05:16.121 START TEST even_2G_alloc 00:05:16.121 ************************************ 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:16.121 20:05:05 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.121 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.642 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.642 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6565832 kB' 'MemAvailable: 9500832 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493860 kB' 'Inactive: 2765996 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122700 kB' 'Mapped: 48716 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168332 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80444 kB' 'KernelStack: 6556 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 
20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.642 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 
20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6565956 kB' 'MemAvailable: 9500956 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493508 kB' 'Inactive: 2765996 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122364 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168336 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80448 kB' 'KernelStack: 6616 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.643 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 
20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 
20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.644 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6565956 kB' 'MemAvailable: 9500956 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493256 kB' 'Inactive: 2765996 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122396 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168336 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80448 kB' 'KernelStack: 6632 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.645 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.646 
20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.646 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 
20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:16.647 nr_hugepages=1024 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:16.647 resv_hugepages=0 00:05:16.647 surplus_hugepages=0 00:05:16.647 anon_hugepages=0 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.647 20:05:05 
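The trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field (IFS=': ' read, continue on every non-matching key) until it reaches HugePages_Rsvd, then echoing its value; 0 here, so the caller sets resv=0. A minimal stand-alone sketch of that lookup follows. The function name lookup_meminfo and the sed-based "Node N " stripping are illustrative stand-ins, not the project's implementation.

#!/usr/bin/env bash
# Simplified stand-in for the lookup traced above: walk /proc/meminfo (or a
# per-node meminfo file) with IFS=': ' and print the value of one field.
# Names and flow are illustrative; the real logic lives in setup/common.sh.
lookup_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node files live under /sys and prefix every line with "Node N ",
    # which has to be stripped before the "field: value" split.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]\+ //' "$mem_f")
    return 1
}

lookup_meminfo HugePages_Rsvd      # prints 0 here, matching resv=0 in the trace
lookup_meminfo HugePages_Free 0    # per-node query, like the node0 lookup later in this log

Because the real helper reuses one read loop for both the global and the per-node files, the same "continue" lines repeat once for every /proc/meminfo key that precedes the requested one, which is why each lookup produces the long run of trace entries seen above.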
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6565956 kB' 'MemAvailable: 9500956 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493236 kB' 'Inactive: 2765996 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122300 kB' 'Mapped: 48604 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168336 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80448 kB' 'KernelStack: 6616 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.647 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.648 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6566208 kB' 'MemUsed: 5675764 kB' 'SwapCached: 0 kB' 'Active: 493340 kB' 'Inactive: 2765992 kB' 'Active(anon): 131116 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 3138684 kB' 'Mapped: 48604 kB' 'AnonPages: 122568 kB' 'Shmem: 10468 kB' 'KernelStack: 6616 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87888 kB' 'Slab: 168332 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 
20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.649 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:16.650 node0=1024 expecting 1024 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.650 00:05:16.650 real 0m0.573s 00:05:16.650 user 0m0.276s 00:05:16.650 sys 0m0.313s 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.650 ************************************ 00:05:16.650 END TEST even_2G_alloc 00:05:16.650 
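By the END TEST marker above, even_2G_alloc has confirmed that the global count satisfies nr_hugepages + surplus + reserved and that node0 reports the expected total ("node0=1024 expecting 1024"). A rough stand-alone re-creation of that bookkeeping is sketched below, assuming the single-node layout shown in this log; the variable names are invented for the sketch and the per-node expectation equals the global count only because there is one node here.

#!/usr/bin/env bash
# Illustrative re-creation of the final checks in the even_2G_alloc trace:
# the global hugepage count must equal requested + surplus + reserved, and
# each NUMA node must report the expected per-node total.
expected=1024
resv=$(grep -m1 '^HugePages_Rsvd:' /proc/meminfo | awk '{print $2}')
surp=$(grep -m1 '^HugePages_Surp:' /proc/meminfo | awk '{print $2}')
total=$(grep -m1 '^HugePages_Total:' /proc/meminfo | awk '{print $2}')

(( total == expected + surp + resv )) || { echo "global hugepage count off"; exit 1; }

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    node_total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node$node=$node_total expecting $expected"   # mirrors the log's wording
    (( node_total == expected )) || exit 1
done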
************************************ 00:05:16.650 20:05:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:16.650 20:05:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:16.650 20:05:05 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.650 20:05:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.650 20:05:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:16.909 ************************************ 00:05:16.909 START TEST odd_alloc 00:05:16.909 ************************************ 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.909 20:05:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.171 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.171 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:17.171 
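The odd_alloc setup above exports HUGEMEM=2049 (MiB) with HUGE_EVEN_ALLOC=yes, which the trace turns into size=2098176 (kB) and nr_hugepages=1025. With the 2048 kB hugepage size reported by this host, 2098176 kB is 1024.5 pages, so the deliberately odd count of 1025 corresponds to rounding the request up to whole pages. The snippet below reproduces only that arithmetic; pages_for_kb is a hypothetical helper, not the project's get_test_nr_hugepages.

#!/usr/bin/env bash
# Worked version of the sizing visible in the odd_alloc setup above:
# HUGEMEM=2049 MiB -> 2049*1024 = 2098176 kB requested, which at a 2048 kB
# hugepage size is 1024.5 pages; the trace settles on 1025, i.e. a round-up
# to whole pages.
pages_for_kb() {
    local size_kb=$1
    local hugepage_kb
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))            # ceiling division
}

HUGEMEM=2049                           # MiB, as exported in the log
pages_for_kb $(( HUGEMEM * 1024 ))     # -> 1025, matching nr_hugepages=1025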
20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6568660 kB' 'MemAvailable: 9503660 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493184 kB' 'Inactive: 2765996 kB' 'Active(anon): 130960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48696 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168388 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80500 kB' 'KernelStack: 6580 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.171 20:05:06 setup.sh.hugepages.odd_alloc 
[trace condensed: setup/common.sh repeats the same IFS=': ' / read -r var val _ / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle for every remaining /proc/meminfo field (MemFree, MemAvailable, Buffers, ... PageTables, ... VmallocUsed), each compared and skipped until the AnonHugePages line is reached]
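The scan condensed above is the heart of get_meminfo: split each /proc/meminfo line on ': ', skip keys that do not match, and return the value of the requested one. A minimal self-contained sketch of that pattern (the helper name below is hypothetical, not the function from setup/common.sh):

    # Hypothetical helper mirroring the scan in the trace; not the project's get_meminfo.
    get_meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # literal comparison, one field per line
            echo "${val:-0}"                    # value in kB (or a bare count for HugePages_*)
            return 0
        done < /proc/meminfo
        echo 0                                  # key not present
    }
    get_meminfo_value AnonHugePages             # prints 0 on this test VM, matching anon=0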
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6568408 kB' 'MemAvailable: 9503408 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493380 kB' 'Inactive: 2765996 kB' 'Active(anon): 131156 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122340 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168396 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80508 kB' 'KernelStack: 6592 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459988 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.172 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.173 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.173 20:05:06 
[trace condensed: the identical per-field scan repeats while get_meminfo searches for HugePages_Surp; keys from Active(anon) through ShmemPmdMapped are each compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue]
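One detail worth noting in the condensed scan: the target key appears fully backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) because the right-hand side of == inside [[ ]] is otherwise treated as a glob pattern; escaping or quoting forces a literal match. A tiny illustration, not taken from the scripts:

    # Escaping vs. globbing on the RHS of [[ == ]].
    key='HugePages_Surp'
    [[ $key == HugePages_* ]] && echo "glob match"                         # pattern match
    [[ $key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo "escaped literal"   # what the trace shows
    [[ $key == "HugePages_Surp" ]] && echo "quoted literal"                # equivalent, quoted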
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:17.174 20:05:06 
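With surp=0 resolved, the script repeats the same lookup for HugePages_Rsvd before checking the totals. For a quick manual sanity check of the same counters outside the test harness, something like the following works (a convenience one-liner, not part of the SPDK scripts):

    # Pull the hugepage counters the verification relies on straight from /proc/meminfo.
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # On this run: Total=1025, Free=1025, Rsvd=0, Surp=0,
    # so the later check (( 1025 == nr_hugepages + surp + resv )) holds.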
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6568408 kB' 'MemAvailable: 9503408 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493144 kB' 'Inactive: 2765996 kB' 'Active(anon): 130920 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122028 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168396 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80508 kB' 'KernelStack: 6592 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.174 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.174 20:05:06 
[trace condensed: the same per-field scan of /proc/meminfo runs once more for HugePages_Rsvd; keys from Buffers through Unaccepted are each compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue]
20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.176 nr_hugepages=1025 00:05:17.176 resv_hugepages=0 00:05:17.176 surplus_hugepages=0 00:05:17.176 anon_hugepages=0 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6568408 
kB' 'MemAvailable: 9503408 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493096 kB' 'Inactive: 2765996 kB' 'Active(anon): 130872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122012 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168396 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80508 kB' 'KernelStack: 6592 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.176 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 
20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
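The xtrace above and below is setup/common.sh's get_meminfo walking the captured meminfo snapshot field by field with IFS=': ' until the requested key matches (HugePages_Rsvd just returned 0 above; HugePages_Total is being scanned here), then echoing that field's value and returning. A minimal standalone sketch of the same lookup, assuming a direct read of /proc/meminfo instead of the script's pre-captured mem array (hypothetical helper name, not the SPDK function itself):

# Sketch: return the value of one /proc/meminfo field, mirroring the
# IFS=': ' / read -r loop seen in the trace.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Total   # prints 1025 at this point in the test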
00:05:17.437 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.438 
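With 1025 total hugepages confirmed, get_nodes (traced just above) globs /sys/devices/system/node/node+([0-9]) and records a per-node count, ending with no_nodes=1 on this single-node VM. A rough equivalent of that enumeration, assuming the count is read back from each node's meminfo rather than taken from the test's own bookkeeping (the real helper assigns the value it already knows):

# Sketch: list NUMA nodes and read each node's HugePages_Total.
shopt -s extglob                       # the +([0-9]) glob needs extglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
echo "no_nodes=${#nodes_sys[@]} node0=${nodes_sys[0]:-n/a}"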
20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.438 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6568408 kB' 'MemUsed: 5673564 kB' 'SwapCached: 0 kB' 'Active: 493056 kB' 'Inactive: 2765996 kB' 'Active(anon): 130832 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3138688 kB' 'Mapped: 48636 kB' 'AnonPages: 122276 kB' 'Shmem: 10468 kB' 'KernelStack: 6592 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87888 kB' 'Slab: 168392 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
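The HugePages_Surp lookup that started above runs with node=0: because /sys/devices/system/node/node0/meminfo exists, mem_f switches from /proc/meminfo to the per-node file, and the expansion mem=("${mem[@]#Node +([0-9]) }") strips the leading "Node 0 " prefix so the same key/value scan (continuing below) works for either source. Condensed from the trace, with extglob assumed enabled as the pattern requires:

# Node-aware meminfo source selection, as seen in setup/common.sh's trace.
shopt -s extglob
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix on per-node lines
printf '%s\n' "${mem[@]:0:3}"      # first few normalized "Key: value" lines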
00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.439 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:17.440 node0=1025 expecting 1025 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:17.440 
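The surplus lookup above comes back 0, so the per-node bookkeeping leaves node0 holding exactly the odd count that was requested, and the test prints 'node0=1025 expecting 1025' before asserting the two strings match. The same invariant can be re-checked by hand against sysfs (illustrative only; the paths and awk extraction are assumptions, and 1025 is the odd page count this test sets):

# Manual re-check of the odd_alloc invariant, outside the test harness.
expected=1025
actual=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
surplus=$(awk '/HugePages_Surp/  {print $NF}' /sys/devices/system/node/node0/meminfo)
(( actual == expected && surplus == 0 )) && echo "node0=$actual expecting $expected"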
************************************ 00:05:17.440 END TEST odd_alloc 00:05:17.440 ************************************ 00:05:17.440 00:05:17.440 real 0m0.583s 00:05:17.440 user 0m0.259s 00:05:17.440 sys 0m0.324s 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.440 20:05:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:17.440 20:05:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:17.440 20:05:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.440 20:05:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.440 20:05:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.440 ************************************ 00:05:17.440 START TEST custom_alloc 00:05:17.440 ************************************ 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.440 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.699 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.699 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.699 20:05:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614252 kB' 'MemAvailable: 10549252 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493848 kB' 'Inactive: 2765996 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122540 kB' 'Mapped: 48764 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168420 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80532 kB' 'KernelStack: 6596 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.699 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
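For custom_alloc, traced above, get_test_nr_hugepages is asked for a 1048576 kB (1 GiB) pool; with the 2048 kB default hugepage size that becomes 512 pages, all assigned to node 0 via HUGENODE='nodes_hp[0]=512' before scripts/setup.sh runs, and the meminfo snapshot above accordingly reports HugePages_Total: 512. The sizing arithmetic, spelled out with values taken from the trace (the HUGENODE handling inside scripts/setup.sh itself is not reproduced here):

# Sketch of the custom_alloc pool sizing.
size_kb=1048576                                                     # requested: 1 GiB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # -> 512
echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"                          # everything on node 0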
00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.700 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.963 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614252 kB' 'MemAvailable: 10549252 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493232 kB' 'Inactive: 2765996 kB' 'Active(anon): 131008 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122428 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168416 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80528 kB' 'KernelStack: 6592 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.964 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.965 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614252 kB' 'MemAvailable: 10549252 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493072 kB' 'Inactive: 2765996 kB' 'Active(anon): 130848 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122208 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168416 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80528 kB' 'KernelStack: 6576 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.966 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.967 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.967 20:05:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.968 nr_hugepages=512 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:17.968 resv_hugepages=0 00:05:17.968 surplus_hugepages=0 00:05:17.968 anon_hugepages=0 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.968 
20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614252 kB' 'MemAvailable: 10549252 kB' 'Buffers: 2436 kB' 'Cached: 3136252 kB' 'SwapCached: 0 kB' 'Active: 493152 kB' 'Inactive: 2765996 kB' 'Active(anon): 130928 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48636 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168416 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80528 kB' 'KernelStack: 6592 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.968 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ [... setup/common.sh@31-@32: the field-by-field scan continues over every remaining /proc/meminfo entry from Buffers through CmaTotal (IFS=': ' / read / compare / continue each time); none of them match HugePages_Total ...] 00:05:17.969 20:05:06
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.969 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7614252 kB' 'MemUsed: 4627720 kB' 'SwapCached: 0 kB' 'Active: 493124 kB' 'Inactive: 2765996 kB' 'Active(anon): 130900 kB' 'Inactive(anon): 0 kB' 
'Active(file): 362224 kB' 'Inactive(file): 2765996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 3138688 kB' 'Mapped: 48636 kB' 'AnonPages: 122296 kB' 'Shmem: 10468 kB' 'KernelStack: 6592 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87888 kB' 'Slab: 168412 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.970 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ [... setup/common.sh@31-@32: the per-node scan walks the remaining node0 meminfo fields from Inactive(anon) through FilePmdMapped (IFS=': ' / read / compare / continue each time); none of them match HugePages_Surp ...] 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.971 node0=512 expecting 512 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:17.971 ************************************ 00:05:17.971 END TEST custom_alloc 00:05:17.971 ************************************ 00:05:17.971 00:05:17.971 real 0m0.589s 00:05:17.971 user 0m0.285s 00:05:17.971 sys 0m0.307s 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.971 20:05:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:17.971 20:05:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:17.971 20:05:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.971 20:05:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.971 20:05:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.971 ************************************ 00:05:17.971 START TEST no_shrink_alloc 00:05:17.971 ************************************ 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:17.971 20:05:07 
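A quick note on the numbers: get_test_nr_hugepages is being asked for a 2097152 kB (2 GiB) pool on node 0, and the meminfo snapshots report a 2048 kB hugepage size, so the pool works out to the nr_hugepages=1024 seen in the following lines. A purely illustrative sketch of that arithmetic on the test host:

    # Illustrative sketch only: how many default-sized hugepages cover the requested pool.
    size_kb=2097152                                                        # pool size passed to get_test_nr_hugepages
    hugepage_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)  # 2048 on this VM
    echo $(( size_kb / hugepage_kb ))                                      # 1024 -> nr_hugepages=1024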
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.971 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.545 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.545 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6560868 kB' 'MemAvailable: 9495864 kB' 'Buffers: 2436 kB' 'Cached: 3136248 kB' 'SwapCached: 0 kB' 'Active: 493852 kB' 'Inactive: 2765992 kB' 'Active(anon): 131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122552 kB' 'Mapped: 48716 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168404 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80516 kB' 'KernelStack: 6556 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.545 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' [... setup/common.sh@31-@32: get_meminfo AnonHugePages walks the remaining /proc/meminfo fields from Buffers through VmallocUsed (IFS=': ' / read / compare / continue each time); none of them match AnonHugePages ...] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6560868 kB' 'MemAvailable: 9495864 kB' 'Buffers: 2436 kB' 'Cached: 3136248 kB' 'SwapCached: 0 kB' 'Active: 493476 kB' 'Inactive: 2765992 kB' 'Active(anon): 131252 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122460 kB' 'Mapped: 48624 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168404 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80516 kB' 'KernelStack: 6584 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 
20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.547 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.548 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6560868 kB' 'MemAvailable: 9495864 kB' 'Buffers: 2436 kB' 'Cached: 3136248 kB' 'SwapCached: 0 kB' 'Active: 488884 kB' 'Inactive: 2765992 kB' 'Active(anon): 126660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 117816 kB' 'Mapped: 47984 kB' 'Shmem: 10468 kB' 'KReclaimable: 87888 kB' 'Slab: 168372 kB' 'SReclaimable: 87888 kB' 'SUnreclaim: 80484 kB' 'KernelStack: 6536 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 
20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.549 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.550 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.551 nr_hugepages=1024 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.551 resv_hugepages=0 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.551 surplus_hugepages=0 00:05:18.551 anon_hugepages=0 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6560868 kB' 'MemAvailable: 9495856 kB' 'Buffers: 2436 kB' 'Cached: 3136248 kB' 'SwapCached: 0 kB' 'Active: 488440 kB' 'Inactive: 2765992 kB' 'Active(anon): 126216 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 117400 kB' 'Mapped: 47984 kB' 'Shmem: 10468 kB' 'KReclaimable: 87868 kB' 'Slab: 168324 kB' 'SReclaimable: 87868 kB' 'SUnreclaim: 80456 kB' 'KernelStack: 6488 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 
kB' 'Committed_AS: 344808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 
20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.551 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 
20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
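In the trace above, setup/common.sh's get_meminfo helper is scanning /proc/meminfo with IFS=': ', reading each line into var/val and skipping every key that is not HugePages_Total with continue; once it reaches the matching key it echoes the value and returns, which is the "echo 1024" seen just below. A minimal sketch of that loop, reconstructed from this xtrace rather than taken from the SPDK sources (the per-node branch is an assumption based on the node0 lookup later in the trace):

    #!/usr/bin/env bash
    shopt -s extglob

    # Reconstruction of the lookup the xtrace above is stepping through.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # With a node argument, read that node's meminfo instead and strip
        # the leading "Node N " prefix (seen below for node0).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # Same read/match/continue pattern as in the trace: split each line
        # on ': ' and echo the value once the requested key is found.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 1024 for HugePages_Total
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Calling get_meminfo HugePages_Total against the meminfo dump printed above reproduces the 1024 that hugepages.sh then checks against nr_hugepages + surp + resv.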
00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.552 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6561268 kB' 'MemUsed: 5680704 kB' 'SwapCached: 0 kB' 'Active: 488460 kB' 'Inactive: 2765992 kB' 'Active(anon): 126236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2765992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 3138684 kB' 'Mapped: 47984 kB' 'AnonPages: 117656 kB' 'Shmem: 10468 kB' 'KernelStack: 6472 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87868 kB' 'Slab: 168320 kB' 'SReclaimable: 87868 kB' 'SUnreclaim: 80452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
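The pass above is get_meminfo HugePages_Surp 0: because a node argument was given, the helper switched mem_f to /sys/devices/system/node/node0/meminfo and stripped the "Node 0 " prefix from every line before running the same matching loop. A short usage sketch, assuming the get_meminfo reconstruction shown earlier; the values are the ones in the node0 dump above:

    # Per-node meminfo files prefix every key with "Node <N> ", e.g.
    #   Node 0 HugePages_Total:  1024
    # which the helper strips before matching.
    surp=$(get_meminfo HugePages_Surp 0)     # -> 0 per the trace above
    total=$(get_meminfo HugePages_Total 0)   # -> 1024 per the node0 dump
    echo "node0: HugePages_Total=$total HugePages_Surp=$surp"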
00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 
20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.553 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.554 20:05:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:18.554 node0=1024 expecting 1024 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.554 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.158 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.158 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.158 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.158 20:05:07 
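By this point the test has confirmed node0=1024 expecting 1024 and, for the no_shrink_alloc case, re-run scripts/setup.sh with NRHUGE=512 while CLEAR_HUGE=no; setup.sh reports "Requested 512 hugepages but 1024 already allocated on node0" and leaves the existing pool in place instead of shrinking it. A hedged sketch of reproducing that step by hand, assuming the repo path used in this workspace and root privileges via sudo (NRHUGE and CLEAR_HUGE are the environment knobs visible in the trace):

    # Ask setup.sh for fewer hugepages than are already allocated,
    # without clearing the existing pool first.
    cd /home/vagrant/spdk_repo/spdk
    sudo CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh

    # Per the log above the pool is not shrunk, so the totals still read 1024.
    grep -E 'HugePages_(Total|Free)' /proc/meminfo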
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6562792 kB' 'MemAvailable: 9497788 kB' 'Buffers: 2436 kB' 'Cached: 3136256 kB' 'SwapCached: 0 kB' 'Active: 488492 kB' 'Inactive: 2766000 kB' 'Active(anon): 126268 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2766000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117644 kB' 'Mapped: 47964 kB' 'Shmem: 10468 kB' 'KReclaimable: 87868 kB' 'Slab: 168192 kB' 'SReclaimable: 87868 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6452 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.158 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 
20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6562916 kB' 'MemAvailable: 9497912 kB' 'Buffers: 2436 kB' 'Cached: 3136256 kB' 'SwapCached: 0 kB' 'Active: 488488 kB' 'Inactive: 2766000 kB' 'Active(anon): 126264 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2766000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 
'Writeback: 0 kB' 'AnonPages: 117368 kB' 'Mapped: 47864 kB' 'Shmem: 10468 kB' 'KReclaimable: 87868 kB' 'Slab: 168192 kB' 'SReclaimable: 87868 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6464 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.159 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.160 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6562916 kB' 'MemAvailable: 9497912 kB' 'Buffers: 2436 kB' 'Cached: 3136256 kB' 'SwapCached: 0 kB' 'Active: 488844 kB' 'Inactive: 2766000 kB' 'Active(anon): 126620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2766000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117768 kB' 'Mapped: 47864 kB' 'Shmem: 10468 kB' 'KReclaimable: 87868 kB' 'Slab: 168192 kB' 'SReclaimable: 87868 kB' 'SUnreclaim: 80324 kB' 'KernelStack: 6448 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.161 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.162 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
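The trace around this point is setup/common.sh's get_meminfo walking /proc/meminfo one "Key: value" record at a time with IFS=': ' and read -r var val _, hitting "continue" for every key that does not match the one requested (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total) and echoing the matching value; hugepages.sh then records anon=0, surp=0, resv=0 and nr_hugepages=1024 and checks that HugePages_Total equals nr_hugepages + surp + resv. A minimal stand-alone sketch of that parsing pattern is below; the helper name get_meminfo_value is hypothetical and this is not the actual setup/common.sh implementation, which also probes a per-node /sys/devices/system/node/node$node/meminfo path and reads the fields via mapfile.

    # Hypothetical minimal equivalent of the pattern traced above: scan
    # "Key: value" pairs from /proc/meminfo and print the value for one key.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # non-matching keys produce the "continue" lines seen in the trace
            echo "$val"                        # matching key: print its value and return success
            return 0
        done </proc/meminfo
        return 1                               # key not present in meminfo
    }

    # Usage mirroring the values resolved in this trace:
    #   get_meminfo_value HugePages_Surp   -> 0
    #   get_meminfo_value HugePages_Rsvd   -> 0
    #   get_meminfo_value HugePages_Total  -> 1024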
00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.163 nr_hugepages=1024 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.163 resv_hugepages=0 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.163 surplus_hugepages=0 00:05:19.163 anon_hugepages=0 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6562916 kB' 'MemAvailable: 9497912 kB' 'Buffers: 2436 kB' 'Cached: 3136256 kB' 'SwapCached: 0 kB' 'Active: 488240 kB' 'Inactive: 2766000 kB' 'Active(anon): 126016 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2766000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117284 kB' 'Mapped: 47864 kB' 'Shmem: 10468 kB' 'KReclaimable: 87868 kB' 
'Slab: 168188 kB' 'SReclaimable: 87868 kB' 'SUnreclaim: 80320 kB' 'KernelStack: 6496 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.163 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.164 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [condensed: 00:05:19.164-00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc repeats the same setup/common.sh@31 IFS=': ' / read -r var val _ and setup/common.sh@32 [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle for each remaining /proc/meminfo key: Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted] 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:19.165 20:05:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6562916 kB' 'MemUsed: 5679056 kB' 'SwapCached: 0 kB' 'Active: 488204 kB' 'Inactive: 2766000 kB' 'Active(anon): 125980 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2766000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 3138692 kB' 'Mapped: 47864 kB' 'AnonPages: 117420 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87868 kB' 'Slab: 168188 kB' 'SReclaimable: 87868 kB' 'SUnreclaim: 80320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.165 20:05:08 
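The xtrace above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time until it reaches the requested key (HugePages_Total for the whole system, HugePages_Surp for node 0 just below), using the node-specific sysfs file when a node id is passed in. A minimal standalone sketch of that lookup, my own simplification rather than the SPDK helper itself:

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup traced here (simplified, not the SPDK code):
# choose the per-node meminfo file when a node id is given, strip the
# "Node N " prefix those files carry, then scan key/value pairs until the
# requested key matches and print its numeric value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}         # no-op for plain /proc/meminfo lines
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                     # numeric value; any "kB" unit lands in $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total       # system-wide pool, 1024 in the run above
get_meminfo_sketch HugePages_Surp 0      # node 0 surplus pages, 0 in the run above
```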
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.165 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [condensed: 00:05:19.165-00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc repeats the same setup/common.sh@31 read and setup/common.sh@32 [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for each node0 meminfo key: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted] 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.166 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:19.166 node0=1024 expecting 1024 00:05:19.167 20:05:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:19.167 00:05:19.167 real 0m1.164s 00:05:19.167 user 0m0.557s 00:05:19.167 sys 0m0.589s 00:05:19.167 20:05:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.167 20:05:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:19.167 ************************************ 00:05:19.167 END TEST no_shrink_alloc 00:05:19.167 ************************************ 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:19.167 20:05:08 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:19.167 ************************************ 00:05:19.167 END TEST hugepages 00:05:19.167 ************************************ 00:05:19.167 00:05:19.167 real 0m4.933s 00:05:19.167 user 0m2.273s 00:05:19.167 sys 0m2.577s 00:05:19.167 20:05:08 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.167 20:05:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 20:05:08 setup.sh -- 
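The tail of the hugepages suite above shows clear_hp resetting the per-node hugepage pools before the next test group runs. A standalone sketch of that cleanup, assuming the standard sysfs layout and root privileges (simplified from what the trace shows, not the SPDK function itself):

```bash
#!/usr/bin/env bash
# Sketch of the clear_hp step traced above: zero every per-node hugepage pool
# so later tests start from a clean slate. Requires root.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes    # the harness exports this flag after clearing, as in the trace
```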
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:19.425 20:05:08 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.425 20:05:08 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.425 20:05:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.425 ************************************ 00:05:19.425 START TEST driver 00:05:19.425 ************************************ 00:05:19.425 20:05:08 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:19.425 * Looking for test storage... 00:05:19.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.425 20:05:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:19.425 20:05:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.425 20:05:08 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.992 20:05:08 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:19.992 20:05:08 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.992 20:05:08 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.992 20:05:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.992 ************************************ 00:05:19.992 START TEST guess_driver 00:05:19.992 ************************************ 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:19.992 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:05:19.992 Looking for driver=uio_pci_generic 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.992 20:05:08 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.559 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:20.559 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:20.559 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.818 20:05:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.385 00:05:21.385 real 0m1.450s 00:05:21.385 user 0m0.508s 00:05:21.385 sys 0m0.929s 00:05:21.385 20:05:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.385 ************************************ 00:05:21.385 END TEST guess_driver 00:05:21.385 20:05:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:21.385 ************************************ 00:05:21.385 ************************************ 00:05:21.385 END TEST driver 00:05:21.385 ************************************ 00:05:21.385 00:05:21.385 real 0m2.156s 00:05:21.385 user 0m0.745s 00:05:21.385 sys 0m1.453s 00:05:21.385 20:05:10 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.385 20:05:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:21.644 20:05:10 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:21.644 20:05:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.644 20:05:10 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.644 20:05:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:21.644 ************************************ 00:05:21.644 START TEST devices 00:05:21.644 
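The guess_driver test above prefers vfio when IOMMU groups are present (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic, probed with modprobe --show-depends. A hedged re-reading of that decision as a standalone script; the function name and the vfio-pci spelling are mine, and the real logic lives in setup/driver.sh:

```bash
#!/usr/bin/env bash
# Sketch of the driver pick traced above: vfio if the IOMMU is usable,
# otherwise uio_pci_generic if its module (and dependencies) resolve.
pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=""
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic        # what this VM ends up with, per the trace
    else
        echo 'No valid driver found'
    fi
}

echo "Looking for driver=$(pick_driver_sketch)"
```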
************************************ 00:05:21.644 20:05:10 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:21.644 * Looking for test storage... 00:05:21.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:21.644 20:05:10 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:21.644 20:05:10 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:21.644 20:05:10 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.644 20:05:10 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.579 20:05:11 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:22.579 20:05:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:22.579 20:05:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:22.579 20:05:11 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:22.579 20:05:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:22.579 20:05:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:22.579 20:05:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
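The devices suite starts by filtering out zoned namespaces and anything smaller than min_disk_size (3221225472 bytes, i.e. 3 GiB). A small sketch of that filter under the standard sysfs layout; the helper name is mine, and the real code skips nvme*c* controller paths with an extglob, which the plain check below mirrors:

```bash
#!/usr/bin/env bash
# Sketch of the device filter traced above: drop zoned block devices and
# anything below the minimum test disk size.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

is_block_zoned_sketch() {
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$dev/queue/zoned") != none ]]
}

for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $dev == *c* ]] && continue            # skip controller/multipath nodes
    is_block_zoned_sketch "$dev" && continue
    size=$(( $(< "$block/size") * 512 ))     # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) && echo "$dev is usable ($size bytes)"
done
```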
00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:22.580 No valid GPT data, bailing 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:22.580 No valid GPT data, bailing 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
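Each candidate namespace is then checked for an existing partition table; a disk for which blkid reports no PTTYPE (the "No valid GPT data, bailing" lines) is treated as free for the tests. A simplified sketch of that check; the real block_in_use also consults scripts/spdk-gpt.py, which is omitted here:

```bash
#!/usr/bin/env bash
# Sketch of the "is this disk free for testing?" check traced above:
# a disk counts as in use only if blkid reports a partition-table type.
block_in_use_sketch() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null)
    if [[ -z $pt ]]; then
        return 1                              # no partition table: disk is free
    fi
    echo "/dev/$block already carries a $pt partition table"
    return 0
}

if ! block_in_use_sketch nvme0n1; then
    echo "nvme0n1 can be claimed for the mount tests"
fi
```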
00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:22.580 No valid GPT data, bailing 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:22.580 No valid GPT data, bailing 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:22.580 20:05:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:22.580 20:05:11 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:22.580 20:05:11 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:22.580 20:05:11 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.580 20:05:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:22.580 ************************************ 00:05:22.580 START TEST nvme_mount 00:05:22.580 ************************************ 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:22.580 20:05:11 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:23.957 Creating new GPT entries in memory. 00:05:23.957 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:23.957 other utilities. 00:05:23.957 20:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:23.957 20:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.957 20:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:23.957 20:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:23.957 20:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:24.891 Creating new GPT entries in memory. 00:05:24.891 The operation has completed successfully. 
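The nvme_mount test just partitioned the disk: zap the old GPT with sgdisk, create one partition spanning sectors 2048 through 264191, and wait for the kernel to expose the new node before touching it. A hedged sketch of the same sequence with stock tools; the harness waits with its own sync_dev_uevents.sh helper, for which udevadm settle stands in below:

```bash
#!/usr/bin/env bash
# Sketch of the partitioning step traced above. Destructive: run only against
# a disposable test disk, as root.
set -euo pipefail
disk=/dev/nvme0n1
part=${disk}p1

sgdisk "$disk" --zap-all                           # destroy old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191   # one partition, sectors 2048..264191

# Wait for the partition device node to appear before formatting it.
udevadm settle
[[ -b $part ]] && echo "$part is ready"
```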
00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71223 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.891 20:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.149 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.149 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:25.149 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:25.149 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.149 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.149 20:05:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.149 20:05:14 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.149 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.149 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.149 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.406 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.406 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.663 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:25.663 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:25.663 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:25.663 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.663 20:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.920 20:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.192 20:05:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.459 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.718 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.718 00:05:26.718 real 0m4.037s 00:05:26.718 user 0m0.711s 00:05:26.718 sys 0m1.066s 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.718 20:05:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:26.718 ************************************ 00:05:26.718 END TEST nvme_mount 00:05:26.718 
************************************ 00:05:26.718 20:05:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:26.718 20:05:15 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.718 20:05:15 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.718 20:05:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:26.718 ************************************ 00:05:26.718 START TEST dm_mount 00:05:26.718 ************************************ 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:26.718 20:05:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:28.094 Creating new GPT entries in memory. 00:05:28.094 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.094 other utilities. 00:05:28.094 20:05:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.094 20:05:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.094 20:05:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.094 20:05:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.094 20:05:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:29.050 Creating new GPT entries in memory. 00:05:29.050 The operation has completed successfully. 
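The partitioning being traced here is ordinary sgdisk usage: wipe the GPT, carve out two small test partitions, and wait for the kernel to publish the new block nodes. A minimal stand-alone sketch, assuming the same scratch disk and 512-byte-sector ranges that appear in the trace (the repo's sync_dev_uevents.sh helper is replaced by a plain udevadm settle):

  # Wipe any existing partition table on the scratch NVMe disk.
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --zap-all

  # Two test partitions; sector ranges copied from the sgdisk calls in the trace.
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335

  # Wait for /dev/nvme0n1p1 and /dev/nvme0n1p2 to appear before using them.
  udevadm settle

The flock wrapper mirrors the trace and keeps concurrent sgdisk invocations from racing on the same device node.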
00:05:29.050 20:05:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:29.050 20:05:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.050 20:05:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:29.050 20:05:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:29.050 20:05:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:29.984 The operation has completed successfully. 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71656 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:29.984 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.985 20:05:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.242 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:30.500 20:05:19 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.500 20:05:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.758 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:31.015 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.015 20:05:19 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:05:31.015 00:05:31.015 real 0m4.260s 00:05:31.015 user 0m0.449s 00:05:31.015 sys 0m0.761s 00:05:31.015 20:05:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.015 20:05:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:31.015 ************************************ 00:05:31.015 END TEST dm_mount 00:05:31.015 ************************************ 00:05:31.015 20:05:20 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:31.015 20:05:20 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:31.015 20:05:20 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.015 20:05:20 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.015 20:05:20 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:31.016 20:05:20 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.016 20:05:20 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.273 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:31.273 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:31.273 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:31.273 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:31.273 20:05:20 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:31.273 20:05:20 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.273 20:05:20 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:31.273 20:05:20 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.273 20:05:20 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:31.273 20:05:20 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.273 20:05:20 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:31.273 ************************************ 00:05:31.273 END TEST devices 00:05:31.273 ************************************ 00:05:31.273 00:05:31.273 real 0m9.862s 00:05:31.273 user 0m1.797s 00:05:31.273 sys 0m2.456s 00:05:31.273 20:05:20 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.273 20:05:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:31.531 ************************************ 00:05:31.531 END TEST setup.sh 00:05:31.531 ************************************ 00:05:31.531 00:05:31.531 real 0m22.122s 00:05:31.531 user 0m7.044s 00:05:31.531 sys 0m9.360s 00:05:31.531 20:05:20 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.531 20:05:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:31.531 20:05:20 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:32.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.096 Hugepages 00:05:32.096 node hugesize free / total 00:05:32.096 node0 1048576kB 0 / 0 00:05:32.096 node0 2048kB 2048 / 2048 00:05:32.096 00:05:32.097 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.097 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:32.354 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:32.354 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:05:32.354 20:05:21 -- spdk/autotest.sh@130 -- # uname -s 00:05:32.354 20:05:21 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:32.354 20:05:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:32.354 20:05:21 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.179 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.179 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.179 20:05:22 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:34.553 20:05:23 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:34.553 20:05:23 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:34.553 20:05:23 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:34.553 20:05:23 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:34.553 20:05:23 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:34.553 20:05:23 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:34.553 20:05:23 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.553 20:05:23 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:34.553 20:05:23 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:34.553 20:05:23 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:34.553 20:05:23 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:34.553 20:05:23 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.812 Waiting for block devices as requested 00:05:34.812 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.812 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.812 20:05:23 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:34.812 20:05:23 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:34.812 20:05:23 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:34.812 20:05:23 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:34.812 20:05:23 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:34.812 20:05:23 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:34.812 20:05:23 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:34.812 20:05:23 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:34.812 20:05:23 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:34.812 20:05:23 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:34.812 20:05:23 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:34.812 20:05:23 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:34.812 20:05:23 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:34.812 20:05:23 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:34.812 20:05:23 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:34.812 20:05:23 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:34.812 20:05:23 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
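The id-ctrl checks in this cleanup pass read two controller fields through the same grep/cut pipeline shown in the trace. A minimal sketch of that extraction, assuming nvme-cli is installed and /dev/nvme1 is the controller under test:

  # OACS (Optional Admin Command Support); bit 3 set => namespace management supported.
  oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)
  oacs_ns_manage=$(( oacs & 0x8 ))

  # unvmcap: unallocated NVM capacity in bytes; 0 means there is nothing to revert.
  unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)

  echo "oacs=${oacs} ns_manage=${oacs_ns_manage} unvmcap=${unvmcap}"

With the QEMU controllers used here this yields oacs=0x12a and unvmcap=0, which is why the loop takes the continue branch for both drives.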
00:05:34.812 20:05:23 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:34.812 20:05:23 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:34.812 20:05:23 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:34.812 20:05:23 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:34.812 20:05:23 -- common/autotest_common.sh@1553 -- # continue 00:05:34.812 20:05:23 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:34.813 20:05:23 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:34.813 20:05:23 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:34.813 20:05:23 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:34.813 20:05:23 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:34.813 20:05:23 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:34.813 20:05:23 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:35.070 20:05:23 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:35.070 20:05:23 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:35.070 20:05:23 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:35.070 20:05:23 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:35.070 20:05:23 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:35.070 20:05:23 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:35.070 20:05:23 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:35.070 20:05:23 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:35.070 20:05:23 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:35.070 20:05:23 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:35.070 20:05:23 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:35.070 20:05:23 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:35.070 20:05:23 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:35.070 20:05:23 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:35.070 20:05:23 -- common/autotest_common.sh@1553 -- # continue 00:05:35.070 20:05:23 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:35.070 20:05:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.070 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:05:35.070 20:05:23 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:35.070 20:05:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:35.070 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:05:35.070 20:05:23 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.636 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.895 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.895 20:05:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:35.895 20:05:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.895 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.895 20:05:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:35.895 20:05:24 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:35.895 20:05:24 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:35.895 20:05:24 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:05:35.895 20:05:24 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:35.895 20:05:24 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:35.895 20:05:24 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:35.895 20:05:24 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:35.895 20:05:24 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.895 20:05:24 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.895 20:05:24 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:35.895 20:05:24 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:35.895 20:05:24 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:35.895 20:05:24 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:35.895 20:05:24 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:35.895 20:05:24 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:35.895 20:05:24 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:35.895 20:05:24 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:35.895 20:05:24 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:35.895 20:05:24 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:35.895 20:05:24 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:35.895 20:05:24 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:35.895 20:05:24 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:35.895 20:05:24 -- common/autotest_common.sh@1589 -- # return 0 00:05:35.895 20:05:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:35.895 20:05:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:35.895 20:05:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.895 20:05:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.895 20:05:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:35.895 20:05:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:35.895 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.895 20:05:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:35.895 20:05:24 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:35.895 20:05:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:35.895 20:05:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.895 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:05:35.895 ************************************ 00:05:35.895 START TEST env 00:05:35.895 ************************************ 00:05:35.895 20:05:24 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.153 * Looking for test storage... 
00:05:36.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:36.153 20:05:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:36.153 20:05:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.153 20:05:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.153 20:05:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.153 ************************************ 00:05:36.153 START TEST env_memory 00:05:36.153 ************************************ 00:05:36.153 20:05:25 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:36.153 00:05:36.153 00:05:36.153 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.153 http://cunit.sourceforge.net/ 00:05:36.153 00:05:36.153 00:05:36.153 Suite: memory 00:05:36.153 Test: alloc and free memory map ...[2024-07-14 20:05:25.094590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:36.153 passed 00:05:36.153 Test: mem map translation ...[2024-07-14 20:05:25.126050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:36.153 [2024-07-14 20:05:25.126292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:36.153 [2024-07-14 20:05:25.126486] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:36.153 [2024-07-14 20:05:25.126633] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.153 passed 00:05:36.153 Test: mem map registration ...[2024-07-14 20:05:25.190950] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:36.153 [2024-07-14 20:05:25.191160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:36.153 passed 00:05:36.412 Test: mem map adjacent registrations ...passed 00:05:36.412 00:05:36.412 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.412 suites 1 1 n/a 0 0 00:05:36.412 tests 4 4 4 0 0 00:05:36.412 asserts 152 152 152 0 n/a 00:05:36.412 00:05:36.412 Elapsed time = 0.217 seconds 00:05:36.412 00:05:36.412 real 0m0.240s 00:05:36.412 user 0m0.218s 00:05:36.412 sys 0m0.015s 00:05:36.412 20:05:25 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.412 20:05:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:36.412 ************************************ 00:05:36.412 END TEST env_memory 00:05:36.412 ************************************ 00:05:36.412 20:05:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.412 20:05:25 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.412 20:05:25 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.412 20:05:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.412 ************************************ 00:05:36.412 START TEST env_vtophys 00:05:36.412 ************************************ 00:05:36.412 20:05:25 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.412 EAL: lib.eal log level changed from notice to debug 00:05:36.412 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 1 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 2 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 3 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 4 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 5 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 6 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 7 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 8 as core 0 on socket 0 00:05:36.412 EAL: Detected lcore 9 as core 0 on socket 0 00:05:36.412 EAL: Maximum logical cores by configuration: 128 00:05:36.412 EAL: Detected CPU lcores: 10 00:05:36.412 EAL: Detected NUMA nodes: 1 00:05:36.412 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:36.412 EAL: Detected shared linkage of DPDK 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:36.412 EAL: Registered [vdev] bus. 00:05:36.412 EAL: bus.vdev log level changed from disabled to notice 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:36.412 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:36.412 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:36.412 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:36.412 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.412 EAL: No shared files mode enabled, IPC is disabled 00:05:36.412 EAL: Selected IOVA mode 'PA' 00:05:36.412 EAL: Probing VFIO support... 00:05:36.412 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:36.412 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:36.412 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.412 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.412 EAL: Setting up physically contiguous memory... 
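The VFIO probe above fails simply because the vfio modules are not loaded in this VM, so EAL stays in IOVA mode 'PA' and the devices remain on uio_pci_generic. A quick way to check the same conditions from a shell; the modprobe line is only an assumption for hosts that do have a working IOMMU:

  # EAL looks for these sysfs entries when probing VFIO support.
  ls -d /sys/module/vfio /sys/module/vfio_pci 2>/dev/null || echo 'VFIO not loaded'

  # No IOMMU groups => vfio-pci cannot be used even if the module loads.
  find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 | wc -l

  # On a host booted with intel_iommu=on or amd_iommu=on, loading the driver
  # lets setup.sh bind NVMe functions to vfio-pci instead of uio_pci_generic.
  modprobe vfio-pci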
00:05:36.412 EAL: Setting maximum number of open files to 524288 00:05:36.412 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.412 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.412 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.412 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.412 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.412 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.412 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.412 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.412 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.412 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.412 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.412 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.412 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.412 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.412 EAL: Hugepages will be freed exactly as allocated. 00:05:36.412 EAL: No shared files mode enabled, IPC is disabled 00:05:36.412 EAL: No shared files mode enabled, IPC is disabled 00:05:36.412 EAL: TSC frequency is ~2200000 KHz 00:05:36.412 EAL: Main lcore 0 is ready (tid=7fba4902ea00;cpuset=[0]) 00:05:36.412 EAL: Trying to obtain current memory policy. 00:05:36.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.412 EAL: Restoring previous memory policy: 0 00:05:36.412 EAL: request: mp_malloc_sync 00:05:36.412 EAL: No shared files mode enabled, IPC is disabled 00:05:36.412 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.412 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:36.412 EAL: No shared files mode enabled, IPC is disabled 00:05:36.412 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:36.412 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.412 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:36.671 00:05:36.671 00:05:36.671 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.671 http://cunit.sourceforge.net/ 00:05:36.671 00:05:36.671 00:05:36.671 Suite: components_suite 00:05:36.671 Test: vtophys_malloc_test ...passed 00:05:36.671 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
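Each "Heap on socket 0 was expanded by N MB" line in the malloc test below is EAL pulling more 2048 kB hugepages from the pool reserved earlier (2048 pages on node0 in the Hugepages table above). A simple way to observe that from outside the test, as a sketch:

  # Snapshot of the hugepage pool; HugePages_Free drops as the heap expands
  # and recovers as it is shrunk again.
  grep -E 'HugePages_(Total|Free)' /proc/meminfo

  # Or follow it live from a second terminal while env_vtophys runs:
  watch -n1 "grep -E 'HugePages_(Total|Free)' /proc/meminfo"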
00:05:36.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.671 EAL: Restoring previous memory policy: 4 00:05:36.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.671 EAL: request: mp_malloc_sync 00:05:36.671 EAL: No shared files mode enabled, IPC is disabled 00:05:36.671 EAL: Heap on socket 0 was expanded by 4MB 00:05:36.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.671 EAL: request: mp_malloc_sync 00:05:36.671 EAL: No shared files mode enabled, IPC is disabled 00:05:36.671 EAL: Heap on socket 0 was shrunk by 4MB 00:05:36.671 EAL: Trying to obtain current memory policy. 00:05:36.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.671 EAL: Restoring previous memory policy: 4 00:05:36.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.671 EAL: request: mp_malloc_sync 00:05:36.671 EAL: No shared files mode enabled, IPC is disabled 00:05:36.671 EAL: Heap on socket 0 was expanded by 6MB 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was shrunk by 6MB 00:05:36.672 EAL: Trying to obtain current memory policy. 00:05:36.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.672 EAL: Restoring previous memory policy: 4 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was expanded by 10MB 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was shrunk by 10MB 00:05:36.672 EAL: Trying to obtain current memory policy. 00:05:36.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.672 EAL: Restoring previous memory policy: 4 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was expanded by 18MB 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was shrunk by 18MB 00:05:36.672 EAL: Trying to obtain current memory policy. 00:05:36.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.672 EAL: Restoring previous memory policy: 4 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was expanded by 34MB 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was shrunk by 34MB 00:05:36.672 EAL: Trying to obtain current memory policy. 
00:05:36.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.672 EAL: Restoring previous memory policy: 4 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was expanded by 66MB 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was shrunk by 66MB 00:05:36.672 EAL: Trying to obtain current memory policy. 00:05:36.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.672 EAL: Restoring previous memory policy: 4 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was expanded by 130MB 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.672 EAL: Trying to obtain current memory policy. 00:05:36.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.672 EAL: Restoring previous memory policy: 4 00:05:36.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.672 EAL: request: mp_malloc_sync 00:05:36.672 EAL: No shared files mode enabled, IPC is disabled 00:05:36.672 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.931 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.931 EAL: request: mp_malloc_sync 00:05:36.931 EAL: No shared files mode enabled, IPC is disabled 00:05:36.931 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.931 EAL: Trying to obtain current memory policy. 00:05:36.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.931 EAL: Restoring previous memory policy: 4 00:05:36.931 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.931 EAL: request: mp_malloc_sync 00:05:36.931 EAL: No shared files mode enabled, IPC is disabled 00:05:36.931 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.190 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.190 EAL: request: mp_malloc_sync 00:05:37.190 EAL: No shared files mode enabled, IPC is disabled 00:05:37.190 EAL: Heap on socket 0 was shrunk by 514MB 00:05:37.190 EAL: Trying to obtain current memory policy. 
00:05:37.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.448 EAL: Restoring previous memory policy: 4 00:05:37.448 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.449 EAL: request: mp_malloc_sync 00:05:37.449 EAL: No shared files mode enabled, IPC is disabled 00:05:37.449 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.707 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.966 passed 00:05:37.966 00:05:37.966 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.966 suites 1 1 n/a 0 0 00:05:37.966 tests 2 2 2 0 0 00:05:37.966 asserts 5337 5337 5337 0 n/a 00:05:37.966 00:05:37.966 Elapsed time = 1.264 seconds 00:05:37.966 EAL: request: mp_malloc_sync 00:05:37.966 EAL: No shared files mode enabled, IPC is disabled 00:05:37.966 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:37.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.966 EAL: request: mp_malloc_sync 00:05:37.966 EAL: No shared files mode enabled, IPC is disabled 00:05:37.966 EAL: Heap on socket 0 was shrunk by 2MB 00:05:37.966 EAL: No shared files mode enabled, IPC is disabled 00:05:37.966 EAL: No shared files mode enabled, IPC is disabled 00:05:37.966 EAL: No shared files mode enabled, IPC is disabled 00:05:37.966 ************************************ 00:05:37.966 END TEST env_vtophys 00:05:37.966 ************************************ 00:05:37.966 00:05:37.966 real 0m1.467s 00:05:37.966 user 0m0.793s 00:05:37.966 sys 0m0.537s 00:05:37.966 20:05:26 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.966 20:05:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:37.966 20:05:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:37.966 20:05:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.966 20:05:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.966 20:05:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.966 ************************************ 00:05:37.966 START TEST env_pci 00:05:37.966 ************************************ 00:05:37.966 20:05:26 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:37.966 00:05:37.966 00:05:37.966 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.966 http://cunit.sourceforge.net/ 00:05:37.966 00:05:37.966 00:05:37.966 Suite: pci 00:05:37.966 Test: pci_hook ...[2024-07-14 20:05:26.874620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72849 has claimed it 00:05:37.966 passed 00:05:37.966 00:05:37.966 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.966 suites 1 1 n/a 0 0 00:05:37.966 tests 1 1 1 0 0 00:05:37.966 asserts 25 25 25 0 n/a 00:05:37.966 00:05:37.967 Elapsed time = 0.002 seconds 00:05:37.967 EAL: Cannot find device (10000:00:01.0) 00:05:37.967 EAL: Failed to attach device on primary process 00:05:37.967 ************************************ 00:05:37.967 END TEST env_pci 00:05:37.967 ************************************ 00:05:37.967 00:05:37.967 real 0m0.022s 00:05:37.967 user 0m0.008s 00:05:37.967 sys 0m0.013s 00:05:37.967 20:05:26 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.967 20:05:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:37.967 20:05:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:37.967 20:05:26 env -- env/env.sh@15 -- # uname 00:05:37.967 20:05:26 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:37.967 20:05:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:37.967 20:05:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.967 20:05:26 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:37.967 20:05:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.967 20:05:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.967 ************************************ 00:05:37.967 START TEST env_dpdk_post_init 00:05:37.967 ************************************ 00:05:37.967 20:05:26 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:37.967 EAL: Detected CPU lcores: 10 00:05:37.967 EAL: Detected NUMA nodes: 1 00:05:37.967 EAL: Detected shared linkage of DPDK 00:05:37.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.967 EAL: Selected IOVA mode 'PA' 00:05:38.225 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.225 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:38.225 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:38.225 Starting DPDK initialization... 00:05:38.225 Starting SPDK post initialization... 00:05:38.225 SPDK NVMe probe 00:05:38.225 Attaching to 0000:00:10.0 00:05:38.225 Attaching to 0000:00:11.0 00:05:38.225 Attached to 0000:00:10.0 00:05:38.225 Attached to 0000:00:11.0 00:05:38.225 Cleaning up... 00:05:38.225 00:05:38.225 real 0m0.178s 00:05:38.225 user 0m0.045s 00:05:38.225 sys 0m0.033s 00:05:38.226 20:05:27 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.226 20:05:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.226 ************************************ 00:05:38.226 END TEST env_dpdk_post_init 00:05:38.226 ************************************ 00:05:38.226 20:05:27 env -- env/env.sh@26 -- # uname 00:05:38.226 20:05:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:38.226 20:05:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.226 20:05:27 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.226 20:05:27 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.226 20:05:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.226 ************************************ 00:05:38.226 START TEST env_mem_callbacks 00:05:38.226 ************************************ 00:05:38.226 20:05:27 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.226 EAL: Detected CPU lcores: 10 00:05:38.226 EAL: Detected NUMA nodes: 1 00:05:38.226 EAL: Detected shared linkage of DPDK 00:05:38.226 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.226 EAL: Selected IOVA mode 'PA' 00:05:38.226 00:05:38.226 00:05:38.226 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.226 http://cunit.sourceforge.net/ 00:05:38.226 00:05:38.226 00:05:38.226 Suite: memory 00:05:38.226 Test: test ... 
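The "Attaching to 0000:00:10.0 / 0000:00:11.0" lines come from SPDK's userspace NVMe driver, which can claim those functions only because setup.sh moved them onto uio_pci_generic earlier in the log. To confirm the current binding from a shell (a generic sysfs check, not something the test itself runs):

  # Show which kernel driver each NVMe PCI function is currently bound to.
  for bdf in 0000:00:10.0 0000:00:11.0; do
      printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
  done
  # Expected while these tests run: uio_pci_generic; after 'setup.sh reset': nvme.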
00:05:38.226 register 0x200000200000 2097152 00:05:38.226 malloc 3145728 00:05:38.226 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.226 register 0x200000400000 4194304 00:05:38.226 buf 0x200000500000 len 3145728 PASSED 00:05:38.226 malloc 64 00:05:38.226 buf 0x2000004fff40 len 64 PASSED 00:05:38.226 malloc 4194304 00:05:38.485 register 0x200000800000 6291456 00:05:38.485 buf 0x200000a00000 len 4194304 PASSED 00:05:38.485 free 0x200000500000 3145728 00:05:38.485 free 0x2000004fff40 64 00:05:38.485 unregister 0x200000400000 4194304 PASSED 00:05:38.485 free 0x200000a00000 4194304 00:05:38.485 unregister 0x200000800000 6291456 PASSED 00:05:38.485 malloc 8388608 00:05:38.485 register 0x200000400000 10485760 00:05:38.485 buf 0x200000600000 len 8388608 PASSED 00:05:38.485 free 0x200000600000 8388608 00:05:38.485 unregister 0x200000400000 10485760 PASSED 00:05:38.485 passed 00:05:38.485 00:05:38.485 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.485 suites 1 1 n/a 0 0 00:05:38.485 tests 1 1 1 0 0 00:05:38.485 asserts 15 15 15 0 n/a 00:05:38.485 00:05:38.485 Elapsed time = 0.007 seconds 00:05:38.485 00:05:38.485 real 0m0.140s 00:05:38.485 user 0m0.014s 00:05:38.485 sys 0m0.024s 00:05:38.485 20:05:27 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.485 20:05:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:38.485 ************************************ 00:05:38.485 END TEST env_mem_callbacks 00:05:38.485 ************************************ 00:05:38.485 ************************************ 00:05:38.485 END TEST env 00:05:38.485 ************************************ 00:05:38.485 00:05:38.485 real 0m2.409s 00:05:38.485 user 0m1.191s 00:05:38.485 sys 0m0.839s 00:05:38.485 20:05:27 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.485 20:05:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.485 20:05:27 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:38.485 20:05:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.485 20:05:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.485 20:05:27 -- common/autotest_common.sh@10 -- # set +x 00:05:38.485 ************************************ 00:05:38.485 START TEST rpc 00:05:38.485 ************************************ 00:05:38.485 20:05:27 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:38.485 * Looking for test storage... 00:05:38.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:38.485 20:05:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=72959 00:05:38.485 20:05:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.485 20:05:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:38.485 20:05:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 72959 00:05:38.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.485 20:05:27 rpc -- common/autotest_common.sh@827 -- # '[' -z 72959 ']' 00:05:38.486 20:05:27 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.486 20:05:27 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:38.486 20:05:27 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
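The rpc suite that starts here drives a live spdk_tgt over its Unix-domain RPC socket: launch the target with the bdev tracepoint group enabled, then block until the socket answers. A hand-run sketch of the same handshake, with paths as in the trace and rpc_get_methods used purely as a liveness probe:

  # Start the target the same way rpc.sh does.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  tgt_pid=$!

  # Poll the default RPC socket until the target is ready to serve requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "spdk_tgt (pid ${tgt_pid}) is listening on /var/tmp/spdk.sock"

The real waitforlisten helper does more thorough PID and socket checks; the polling loop above is just a simplified stand-in.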
00:05:38.486 20:05:27 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:38.486 20:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.745 [2024-07-14 20:05:27.572666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:38.745 [2024-07-14 20:05:27.573080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72959 ] 00:05:38.745 [2024-07-14 20:05:27.709814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.745 [2024-07-14 20:05:27.805214] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:38.745 [2024-07-14 20:05:27.805260] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72959' to capture a snapshot of events at runtime. 00:05:38.745 [2024-07-14 20:05:27.805272] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:38.745 [2024-07-14 20:05:27.805281] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:38.745 [2024-07-14 20:05:27.805288] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72959 for offline analysis/debug. 00:05:38.745 [2024-07-14 20:05:27.805313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.679 20:05:28 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.679 20:05:28 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:39.679 20:05:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.679 20:05:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.679 20:05:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:39.679 20:05:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:39.679 20:05:28 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.679 20:05:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.679 20:05:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.679 ************************************ 00:05:39.679 START TEST rpc_integrity 00:05:39.679 ************************************ 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.679 20:05:28 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.679 { 00:05:39.679 "aliases": [ 00:05:39.679 "425174bc-a86d-410e-9c60-770856985cf0" 00:05:39.679 ], 00:05:39.679 "assigned_rate_limits": { 00:05:39.679 "r_mbytes_per_sec": 0, 00:05:39.679 "rw_ios_per_sec": 0, 00:05:39.679 "rw_mbytes_per_sec": 0, 00:05:39.679 "w_mbytes_per_sec": 0 00:05:39.679 }, 00:05:39.679 "block_size": 512, 00:05:39.679 "claimed": false, 00:05:39.679 "driver_specific": {}, 00:05:39.679 "memory_domains": [ 00:05:39.679 { 00:05:39.679 "dma_device_id": "system", 00:05:39.679 "dma_device_type": 1 00:05:39.679 }, 00:05:39.679 { 00:05:39.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.679 "dma_device_type": 2 00:05:39.679 } 00:05:39.679 ], 00:05:39.679 "name": "Malloc0", 00:05:39.679 "num_blocks": 16384, 00:05:39.679 "product_name": "Malloc disk", 00:05:39.679 "supported_io_types": { 00:05:39.679 "abort": true, 00:05:39.679 "compare": false, 00:05:39.679 "compare_and_write": false, 00:05:39.679 "flush": true, 00:05:39.679 "nvme_admin": false, 00:05:39.679 "nvme_io": false, 00:05:39.679 "read": true, 00:05:39.679 "reset": true, 00:05:39.679 "unmap": true, 00:05:39.679 "write": true, 00:05:39.679 "write_zeroes": true 00:05:39.679 }, 00:05:39.679 "uuid": "425174bc-a86d-410e-9c60-770856985cf0", 00:05:39.679 "zoned": false 00:05:39.679 } 00:05:39.679 ]' 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.679 [2024-07-14 20:05:28.695659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:39.679 [2024-07-14 20:05:28.695711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.679 [2024-07-14 20:05:28.695730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cdfe90 00:05:39.679 [2024-07-14 20:05:28.695739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.679 [2024-07-14 20:05:28.697520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.679 [2024-07-14 20:05:28.697553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.679 Passthru0 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.679 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:05:39.679 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.679 { 00:05:39.679 "aliases": [ 00:05:39.679 "425174bc-a86d-410e-9c60-770856985cf0" 00:05:39.679 ], 00:05:39.679 "assigned_rate_limits": { 00:05:39.679 "r_mbytes_per_sec": 0, 00:05:39.679 "rw_ios_per_sec": 0, 00:05:39.679 "rw_mbytes_per_sec": 0, 00:05:39.679 "w_mbytes_per_sec": 0 00:05:39.679 }, 00:05:39.679 "block_size": 512, 00:05:39.679 "claim_type": "exclusive_write", 00:05:39.680 "claimed": true, 00:05:39.680 "driver_specific": {}, 00:05:39.680 "memory_domains": [ 00:05:39.680 { 00:05:39.680 "dma_device_id": "system", 00:05:39.680 "dma_device_type": 1 00:05:39.680 }, 00:05:39.680 { 00:05:39.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.680 "dma_device_type": 2 00:05:39.680 } 00:05:39.680 ], 00:05:39.680 "name": "Malloc0", 00:05:39.680 "num_blocks": 16384, 00:05:39.680 "product_name": "Malloc disk", 00:05:39.680 "supported_io_types": { 00:05:39.680 "abort": true, 00:05:39.680 "compare": false, 00:05:39.680 "compare_and_write": false, 00:05:39.680 "flush": true, 00:05:39.680 "nvme_admin": false, 00:05:39.680 "nvme_io": false, 00:05:39.680 "read": true, 00:05:39.680 "reset": true, 00:05:39.680 "unmap": true, 00:05:39.680 "write": true, 00:05:39.680 "write_zeroes": true 00:05:39.680 }, 00:05:39.680 "uuid": "425174bc-a86d-410e-9c60-770856985cf0", 00:05:39.680 "zoned": false 00:05:39.680 }, 00:05:39.680 { 00:05:39.680 "aliases": [ 00:05:39.680 "93dfe3cc-f7e1-57a2-84fc-2c8549d40f82" 00:05:39.680 ], 00:05:39.680 "assigned_rate_limits": { 00:05:39.680 "r_mbytes_per_sec": 0, 00:05:39.680 "rw_ios_per_sec": 0, 00:05:39.680 "rw_mbytes_per_sec": 0, 00:05:39.680 "w_mbytes_per_sec": 0 00:05:39.680 }, 00:05:39.680 "block_size": 512, 00:05:39.680 "claimed": false, 00:05:39.680 "driver_specific": { 00:05:39.680 "passthru": { 00:05:39.680 "base_bdev_name": "Malloc0", 00:05:39.680 "name": "Passthru0" 00:05:39.680 } 00:05:39.680 }, 00:05:39.680 "memory_domains": [ 00:05:39.680 { 00:05:39.680 "dma_device_id": "system", 00:05:39.680 "dma_device_type": 1 00:05:39.680 }, 00:05:39.680 { 00:05:39.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.680 "dma_device_type": 2 00:05:39.680 } 00:05:39.680 ], 00:05:39.680 "name": "Passthru0", 00:05:39.680 "num_blocks": 16384, 00:05:39.680 "product_name": "passthru", 00:05:39.680 "supported_io_types": { 00:05:39.680 "abort": true, 00:05:39.680 "compare": false, 00:05:39.680 "compare_and_write": false, 00:05:39.680 "flush": true, 00:05:39.680 "nvme_admin": false, 00:05:39.680 "nvme_io": false, 00:05:39.680 "read": true, 00:05:39.680 "reset": true, 00:05:39.680 "unmap": true, 00:05:39.680 "write": true, 00:05:39.680 "write_zeroes": true 00:05:39.680 }, 00:05:39.680 "uuid": "93dfe3cc-f7e1-57a2-84fc-2c8549d40f82", 00:05:39.680 "zoned": false 00:05:39.680 } 00:05:39.680 ]' 00:05:39.680 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.938 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.938 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.938 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
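The rpc_integrity sequence traced above (create a malloc bdev, stack a passthru bdev on it, count the bdevs with jq, then delete both) can be replayed by hand against a running spdk_tgt. A minimal sketch, assuming the target is listening on the default /var/tmp/spdk.sock and using only RPC names that appear in the trace:

  SPDK=/home/vagrant/spdk_repo/spdk                     # checkout path from the trace
  RPC="$SPDK/scripts/rpc.py"                            # stock SPDK RPC client
  MALLOC=$("$RPC" bdev_malloc_create 8 512)             # 8 MB, 512-byte blocks; prints the new bdev name (Malloc0 above)
  "$RPC" bdev_passthru_create -b "$MALLOC" -p Passthru0 # claims the malloc bdev, matching the claim_type output above
  "$RPC" bdev_get_bdevs | jq length                     # expect 2: the malloc bdev plus the passthru stacked on it
  "$RPC" bdev_passthru_delete Passthru0
  "$RPC" bdev_malloc_delete "$MALLOC"
  "$RPC" bdev_get_bdevs | jq length                     # back to 0, matching the final check above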
00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.938 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.938 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.938 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.938 ************************************ 00:05:39.938 END TEST rpc_integrity 00:05:39.938 ************************************ 00:05:39.938 20:05:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.938 00:05:39.938 real 0m0.323s 00:05:39.938 user 0m0.201s 00:05:39.938 sys 0m0.044s 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.938 20:05:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.938 20:05:28 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.938 20:05:28 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.938 20:05:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 ************************************ 00:05:39.938 START TEST rpc_plugins 00:05:39.938 ************************************ 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:39.938 20:05:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.938 20:05:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.938 20:05:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.938 20:05:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.938 { 00:05:39.938 "aliases": [ 00:05:39.938 "10ae275f-c38f-4f66-8ac1-6e3ecb600fb0" 00:05:39.938 ], 00:05:39.938 "assigned_rate_limits": { 00:05:39.938 "r_mbytes_per_sec": 0, 00:05:39.938 "rw_ios_per_sec": 0, 00:05:39.938 "rw_mbytes_per_sec": 0, 00:05:39.938 "w_mbytes_per_sec": 0 00:05:39.938 }, 00:05:39.938 "block_size": 4096, 00:05:39.938 "claimed": false, 00:05:39.938 "driver_specific": {}, 00:05:39.938 "memory_domains": [ 00:05:39.938 { 00:05:39.938 "dma_device_id": "system", 00:05:39.938 "dma_device_type": 1 00:05:39.938 }, 00:05:39.938 { 00:05:39.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.938 "dma_device_type": 2 00:05:39.938 } 00:05:39.938 ], 00:05:39.938 "name": "Malloc1", 00:05:39.938 "num_blocks": 256, 00:05:39.938 "product_name": "Malloc disk", 00:05:39.938 "supported_io_types": { 00:05:39.938 "abort": true, 00:05:39.938 "compare": false, 00:05:39.938 "compare_and_write": false, 00:05:39.938 "flush": true, 00:05:39.938 "nvme_admin": false, 00:05:39.938 
"nvme_io": false, 00:05:39.938 "read": true, 00:05:39.938 "reset": true, 00:05:39.938 "unmap": true, 00:05:39.938 "write": true, 00:05:39.938 "write_zeroes": true 00:05:39.938 }, 00:05:39.938 "uuid": "10ae275f-c38f-4f66-8ac1-6e3ecb600fb0", 00:05:39.938 "zoned": false 00:05:39.938 } 00:05:39.938 ]' 00:05:39.938 20:05:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:39.938 20:05:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:39.938 20:05:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.938 20:05:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.938 20:05:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:39.938 20:05:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.938 20:05:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.938 20:05:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.938 20:05:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:39.938 20:05:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:40.197 ************************************ 00:05:40.197 END TEST rpc_plugins 00:05:40.197 ************************************ 00:05:40.197 20:05:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:40.197 00:05:40.197 real 0m0.161s 00:05:40.197 user 0m0.106s 00:05:40.197 sys 0m0.018s 00:05:40.197 20:05:29 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.197 20:05:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.197 20:05:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:40.197 20:05:29 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.197 20:05:29 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.197 20:05:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.197 ************************************ 00:05:40.197 START TEST rpc_trace_cmd_test 00:05:40.197 ************************************ 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:40.197 "bdev": { 00:05:40.197 "mask": "0x8", 00:05:40.197 "tpoint_mask": "0xffffffffffffffff" 00:05:40.197 }, 00:05:40.197 "bdev_nvme": { 00:05:40.197 "mask": "0x4000", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "blobfs": { 00:05:40.197 "mask": "0x80", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "dsa": { 00:05:40.197 "mask": "0x200", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "ftl": { 00:05:40.197 "mask": "0x40", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "iaa": { 00:05:40.197 "mask": "0x1000", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "iscsi_conn": { 00:05:40.197 
"mask": "0x2", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "nvme_pcie": { 00:05:40.197 "mask": "0x800", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "nvme_tcp": { 00:05:40.197 "mask": "0x2000", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "nvmf_rdma": { 00:05:40.197 "mask": "0x10", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "nvmf_tcp": { 00:05:40.197 "mask": "0x20", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "scsi": { 00:05:40.197 "mask": "0x4", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "sock": { 00:05:40.197 "mask": "0x8000", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "thread": { 00:05:40.197 "mask": "0x400", 00:05:40.197 "tpoint_mask": "0x0" 00:05:40.197 }, 00:05:40.197 "tpoint_group_mask": "0x8", 00:05:40.197 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72959" 00:05:40.197 }' 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:40.197 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:40.457 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:40.457 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:40.457 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:40.457 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:40.457 ************************************ 00:05:40.457 END TEST rpc_trace_cmd_test 00:05:40.457 ************************************ 00:05:40.457 20:05:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:40.457 00:05:40.457 real 0m0.270s 00:05:40.457 user 0m0.234s 00:05:40.457 sys 0m0.025s 00:05:40.457 20:05:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.457 20:05:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:40.457 20:05:29 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:40.457 20:05:29 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:40.457 20:05:29 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.457 20:05:29 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.457 20:05:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.457 ************************************ 00:05:40.457 START TEST go_rpc 00:05:40.457 ************************************ 00:05:40.457 20:05:29 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:05:40.457 20:05:29 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:40.457 20:05:29 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:40.457 20:05:29 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:40.457 20:05:29 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:40.457 20:05:29 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.457 20:05:29 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.457 20:05:29 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.457 20:05:29 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.457 20:05:29 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:40.457 20:05:29 
rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["cd0e4f97-8933-45db-bad1-7a8e03b64941"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"cd0e4f97-8933-45db-bad1-7a8e03b64941","zoned":false}]' 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:40.716 20:05:29 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.716 20:05:29 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.716 20:05:29 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:40.716 ************************************ 00:05:40.716 END TEST go_rpc 00:05:40.716 ************************************ 00:05:40.716 20:05:29 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:40.716 00:05:40.716 real 0m0.230s 00:05:40.716 user 0m0.153s 00:05:40.716 sys 0m0.042s 00:05:40.716 20:05:29 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.716 20:05:29 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.716 20:05:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:40.716 20:05:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:40.716 20:05:29 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.716 20:05:29 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.716 20:05:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.716 ************************************ 00:05:40.716 START TEST rpc_daemon_integrity 00:05:40.716 ************************************ 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.716 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.975 
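go_rpc drives the same bdev RPCs through the Go example client instead of rpc.py; its stdout is the bdev list, which the test then length-checks with jq. A sketch of running it stand-alone while a target is up (the example is assumed to use the default RPC socket, as in this trace):

  SPDK=/home/vagrant/spdk_repo/spdk
  M=$("$SPDK/scripts/rpc.py" bdev_malloc_create 8 512)  # give the Go client something to list (Malloc2 above)
  "$SPDK/build/examples/hello_gorpc" | jq length        # Go JSON-RPC client; prints the bdev array, so expect 1 here
  "$SPDK/scripts/rpc.py" bdev_malloc_delete "$M"
  "$SPDK/build/examples/hello_gorpc" | jq length        # and 0 after the delete, as in the trace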
20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.975 { 00:05:40.975 "aliases": [ 00:05:40.975 "10735484-637e-40b4-859b-7f57a3252f3c" 00:05:40.975 ], 00:05:40.975 "assigned_rate_limits": { 00:05:40.975 "r_mbytes_per_sec": 0, 00:05:40.975 "rw_ios_per_sec": 0, 00:05:40.975 "rw_mbytes_per_sec": 0, 00:05:40.975 "w_mbytes_per_sec": 0 00:05:40.975 }, 00:05:40.975 "block_size": 512, 00:05:40.975 "claimed": false, 00:05:40.975 "driver_specific": {}, 00:05:40.975 "memory_domains": [ 00:05:40.975 { 00:05:40.975 "dma_device_id": "system", 00:05:40.975 "dma_device_type": 1 00:05:40.975 }, 00:05:40.975 { 00:05:40.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.975 "dma_device_type": 2 00:05:40.975 } 00:05:40.975 ], 00:05:40.975 "name": "Malloc3", 00:05:40.975 "num_blocks": 16384, 00:05:40.975 "product_name": "Malloc disk", 00:05:40.975 "supported_io_types": { 00:05:40.975 "abort": true, 00:05:40.975 "compare": false, 00:05:40.975 "compare_and_write": false, 00:05:40.975 "flush": true, 00:05:40.975 "nvme_admin": false, 00:05:40.975 "nvme_io": false, 00:05:40.975 "read": true, 00:05:40.975 "reset": true, 00:05:40.975 "unmap": true, 00:05:40.975 "write": true, 00:05:40.975 "write_zeroes": true 00:05:40.975 }, 00:05:40.975 "uuid": "10735484-637e-40b4-859b-7f57a3252f3c", 00:05:40.975 "zoned": false 00:05:40.975 } 00:05:40.975 ]' 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.975 [2024-07-14 20:05:29.889252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:40.975 [2024-07-14 20:05:29.889322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.975 [2024-07-14 20:05:29.889342] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e83ac0 00:05:40.975 [2024-07-14 20:05:29.889352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.975 [2024-07-14 20:05:29.891054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.975 [2024-07-14 20:05:29.891091] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.975 Passthru0 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.975 20:05:29 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.975 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.975 { 00:05:40.975 "aliases": [ 00:05:40.975 "10735484-637e-40b4-859b-7f57a3252f3c" 00:05:40.975 ], 00:05:40.975 "assigned_rate_limits": { 00:05:40.975 "r_mbytes_per_sec": 0, 00:05:40.975 "rw_ios_per_sec": 0, 00:05:40.975 "rw_mbytes_per_sec": 0, 00:05:40.975 "w_mbytes_per_sec": 0 00:05:40.975 }, 00:05:40.975 "block_size": 512, 00:05:40.975 "claim_type": "exclusive_write", 00:05:40.975 "claimed": true, 00:05:40.975 "driver_specific": {}, 00:05:40.975 "memory_domains": [ 00:05:40.975 { 00:05:40.975 "dma_device_id": "system", 00:05:40.975 "dma_device_type": 1 00:05:40.975 }, 00:05:40.975 { 00:05:40.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.975 "dma_device_type": 2 00:05:40.975 } 00:05:40.975 ], 00:05:40.976 "name": "Malloc3", 00:05:40.976 "num_blocks": 16384, 00:05:40.976 "product_name": "Malloc disk", 00:05:40.976 "supported_io_types": { 00:05:40.976 "abort": true, 00:05:40.976 "compare": false, 00:05:40.976 "compare_and_write": false, 00:05:40.976 "flush": true, 00:05:40.976 "nvme_admin": false, 00:05:40.976 "nvme_io": false, 00:05:40.976 "read": true, 00:05:40.976 "reset": true, 00:05:40.976 "unmap": true, 00:05:40.976 "write": true, 00:05:40.976 "write_zeroes": true 00:05:40.976 }, 00:05:40.976 "uuid": "10735484-637e-40b4-859b-7f57a3252f3c", 00:05:40.976 "zoned": false 00:05:40.976 }, 00:05:40.976 { 00:05:40.976 "aliases": [ 00:05:40.976 "5224e5a2-e038-5ed2-b5de-9c5a96f94db0" 00:05:40.976 ], 00:05:40.976 "assigned_rate_limits": { 00:05:40.976 "r_mbytes_per_sec": 0, 00:05:40.976 "rw_ios_per_sec": 0, 00:05:40.976 "rw_mbytes_per_sec": 0, 00:05:40.976 "w_mbytes_per_sec": 0 00:05:40.976 }, 00:05:40.976 "block_size": 512, 00:05:40.976 "claimed": false, 00:05:40.976 "driver_specific": { 00:05:40.976 "passthru": { 00:05:40.976 "base_bdev_name": "Malloc3", 00:05:40.976 "name": "Passthru0" 00:05:40.976 } 00:05:40.976 }, 00:05:40.976 "memory_domains": [ 00:05:40.976 { 00:05:40.976 "dma_device_id": "system", 00:05:40.976 "dma_device_type": 1 00:05:40.976 }, 00:05:40.976 { 00:05:40.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.976 "dma_device_type": 2 00:05:40.976 } 00:05:40.976 ], 00:05:40.976 "name": "Passthru0", 00:05:40.976 "num_blocks": 16384, 00:05:40.976 "product_name": "passthru", 00:05:40.976 "supported_io_types": { 00:05:40.976 "abort": true, 00:05:40.976 "compare": false, 00:05:40.976 "compare_and_write": false, 00:05:40.976 "flush": true, 00:05:40.976 "nvme_admin": false, 00:05:40.976 "nvme_io": false, 00:05:40.976 "read": true, 00:05:40.976 "reset": true, 00:05:40.976 "unmap": true, 00:05:40.976 "write": true, 00:05:40.976 "write_zeroes": true 00:05:40.976 }, 00:05:40.976 "uuid": "5224e5a2-e038-5ed2-b5de-9c5a96f94db0", 00:05:40.976 "zoned": false 00:05:40.976 } 00:05:40.976 ]' 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc3 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.976 20:05:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.976 ************************************ 00:05:40.976 END TEST rpc_daemon_integrity 00:05:40.976 ************************************ 00:05:40.976 20:05:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.976 00:05:40.976 real 0m0.327s 00:05:40.976 user 0m0.219s 00:05:40.976 sys 0m0.040s 00:05:40.976 20:05:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.976 20:05:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 20:05:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:41.234 20:05:30 rpc -- rpc/rpc.sh@84 -- # killprocess 72959 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@946 -- # '[' -z 72959 ']' 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@950 -- # kill -0 72959 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@951 -- # uname 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72959 00:05:41.234 killing process with pid 72959 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72959' 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@965 -- # kill 72959 00:05:41.234 20:05:30 rpc -- common/autotest_common.sh@970 -- # wait 72959 00:05:41.497 00:05:41.497 real 0m3.084s 00:05:41.497 user 0m4.040s 00:05:41.497 sys 0m0.766s 00:05:41.497 20:05:30 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.497 20:05:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.497 ************************************ 00:05:41.497 END TEST rpc 00:05:41.497 ************************************ 00:05:41.497 20:05:30 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.497 20:05:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.497 20:05:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.497 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:05:41.497 ************************************ 00:05:41.497 START TEST skip_rpc 00:05:41.497 ************************************ 00:05:41.497 20:05:30 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.755 * Looking for test storage... 
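Each suite above tears its target down with the same killprocess idiom: confirm the pid is still an SPDK reactor, send SIGTERM, then reap it. A stand-alone sketch (the pid is illustrative; use whatever waitforlisten printed):

  PID=72959                                             # illustrative: the pid reported above
  kill -0 "$PID"                                        # process still exists?
  [ "$(ps --no-headers -o comm= "$PID")" = reactor_0 ]  # and is still an SPDK reactor, not a recycled pid
  kill "$PID"                                           # default SIGTERM; spdk_tgt shuts down cleanly
  wait "$PID" 2>/dev/null                               # reaping only works if the target was started from this shell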
00:05:41.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.755 20:05:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.755 20:05:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:41.755 20:05:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:41.755 20:05:30 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.756 20:05:30 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.756 20:05:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.756 ************************************ 00:05:41.756 START TEST skip_rpc 00:05:41.756 ************************************ 00:05:41.756 20:05:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:41.756 20:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=73220 00:05:41.756 20:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.756 20:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:41.756 20:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:41.756 [2024-07-14 20:05:30.717697] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:41.756 [2024-07-14 20:05:30.717843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73220 ] 00:05:42.014 [2024-07-14 20:05:30.858311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.014 [2024-07-14 20:05:30.955321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.297 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.298 2024/07/14 20:05:35 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 73220 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 73220 ']' 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 73220 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73220 00:05:47.298 killing process with pid 73220 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73220' 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 73220 00:05:47.298 20:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 73220 00:05:47.298 00:05:47.298 real 0m5.406s 00:05:47.298 user 0m5.020s 00:05:47.298 sys 0m0.288s 00:05:47.298 20:05:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.298 20:05:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.298 ************************************ 00:05:47.298 END TEST skip_rpc 00:05:47.298 ************************************ 00:05:47.298 20:05:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.298 20:05:36 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.298 20:05:36 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.298 20:05:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.298 ************************************ 00:05:47.298 START TEST skip_rpc_with_json 00:05:47.298 ************************************ 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73312 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 73312 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 73312 ']' 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
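The skip_rpc case above is a negative test: with --no-rpc-server there is no /var/tmp/spdk.sock, so any RPC must fail (hence the fixed "sleep 5" instead of waitforlisten, and the "could not connect to a Unix socket" error). The same check by hand, with '!' standing in for the harness's NOT wrapper:

  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &   # target comes up without an RPC listener
  TGT=$!
  sleep 5                                                    # nothing to poll, so just wait, as the test does
  ! "$SPDK/scripts/rpc.py" spdk_get_version                  # must fail: no socket to dial
  sudo kill "$TGT"; wait "$TGT" 2>/dev/null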
00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.298 20:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.298 [2024-07-14 20:05:36.174302] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:47.298 [2024-07-14 20:05:36.174415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73312 ] 00:05:47.298 [2024-07-14 20:05:36.314862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.557 [2024-07-14 20:05:36.411626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.124 [2024-07-14 20:05:37.157793] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:48.124 2024/07/14 20:05:37 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:48.124 request: 00:05:48.124 { 00:05:48.124 "method": "nvmf_get_transports", 00:05:48.124 "params": { 00:05:48.124 "trtype": "tcp" 00:05:48.124 } 00:05:48.124 } 00:05:48.124 Got JSON-RPC error response 00:05:48.124 GoRPCClient: error on JSON-RPC call 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.124 [2024-07-14 20:05:37.169976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.124 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.383 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.383 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.383 { 00:05:48.383 "subsystems": [ 00:05:48.383 { 00:05:48.383 "subsystem": "keyring", 00:05:48.383 "config": [] 00:05:48.383 }, 00:05:48.383 { 00:05:48.383 "subsystem": "iobuf", 00:05:48.383 "config": [ 00:05:48.383 { 00:05:48.383 "method": "iobuf_set_options", 00:05:48.383 "params": { 00:05:48.383 "large_bufsize": 135168, 00:05:48.383 "large_pool_count": 1024, 00:05:48.383 "small_bufsize": 8192, 00:05:48.383 "small_pool_count": 8192 00:05:48.383 } 00:05:48.383 } 00:05:48.383 ] 
00:05:48.383 }, 00:05:48.383 { 00:05:48.383 "subsystem": "sock", 00:05:48.383 "config": [ 00:05:48.383 { 00:05:48.383 "method": "sock_set_default_impl", 00:05:48.383 "params": { 00:05:48.383 "impl_name": "posix" 00:05:48.383 } 00:05:48.383 }, 00:05:48.383 { 00:05:48.383 "method": "sock_impl_set_options", 00:05:48.383 "params": { 00:05:48.384 "enable_ktls": false, 00:05:48.384 "enable_placement_id": 0, 00:05:48.384 "enable_quickack": false, 00:05:48.384 "enable_recv_pipe": true, 00:05:48.384 "enable_zerocopy_send_client": false, 00:05:48.384 "enable_zerocopy_send_server": true, 00:05:48.384 "impl_name": "ssl", 00:05:48.384 "recv_buf_size": 4096, 00:05:48.384 "send_buf_size": 4096, 00:05:48.384 "tls_version": 0, 00:05:48.384 "zerocopy_threshold": 0 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "sock_impl_set_options", 00:05:48.384 "params": { 00:05:48.384 "enable_ktls": false, 00:05:48.384 "enable_placement_id": 0, 00:05:48.384 "enable_quickack": false, 00:05:48.384 "enable_recv_pipe": true, 00:05:48.384 "enable_zerocopy_send_client": false, 00:05:48.384 "enable_zerocopy_send_server": true, 00:05:48.384 "impl_name": "posix", 00:05:48.384 "recv_buf_size": 2097152, 00:05:48.384 "send_buf_size": 2097152, 00:05:48.384 "tls_version": 0, 00:05:48.384 "zerocopy_threshold": 0 00:05:48.384 } 00:05:48.384 } 00:05:48.384 ] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "vmd", 00:05:48.384 "config": [] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "accel", 00:05:48.384 "config": [ 00:05:48.384 { 00:05:48.384 "method": "accel_set_options", 00:05:48.384 "params": { 00:05:48.384 "buf_count": 2048, 00:05:48.384 "large_cache_size": 16, 00:05:48.384 "sequence_count": 2048, 00:05:48.384 "small_cache_size": 128, 00:05:48.384 "task_count": 2048 00:05:48.384 } 00:05:48.384 } 00:05:48.384 ] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "bdev", 00:05:48.384 "config": [ 00:05:48.384 { 00:05:48.384 "method": "bdev_set_options", 00:05:48.384 "params": { 00:05:48.384 "bdev_auto_examine": true, 00:05:48.384 "bdev_io_cache_size": 256, 00:05:48.384 "bdev_io_pool_size": 65535, 00:05:48.384 "iobuf_large_cache_size": 16, 00:05:48.384 "iobuf_small_cache_size": 128 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "bdev_raid_set_options", 00:05:48.384 "params": { 00:05:48.384 "process_window_size_kb": 1024 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "bdev_iscsi_set_options", 00:05:48.384 "params": { 00:05:48.384 "timeout_sec": 30 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "bdev_nvme_set_options", 00:05:48.384 "params": { 00:05:48.384 "action_on_timeout": "none", 00:05:48.384 "allow_accel_sequence": false, 00:05:48.384 "arbitration_burst": 0, 00:05:48.384 "bdev_retry_count": 3, 00:05:48.384 "ctrlr_loss_timeout_sec": 0, 00:05:48.384 "delay_cmd_submit": true, 00:05:48.384 "dhchap_dhgroups": [ 00:05:48.384 "null", 00:05:48.384 "ffdhe2048", 00:05:48.384 "ffdhe3072", 00:05:48.384 "ffdhe4096", 00:05:48.384 "ffdhe6144", 00:05:48.384 "ffdhe8192" 00:05:48.384 ], 00:05:48.384 "dhchap_digests": [ 00:05:48.384 "sha256", 00:05:48.384 "sha384", 00:05:48.384 "sha512" 00:05:48.384 ], 00:05:48.384 "disable_auto_failback": false, 00:05:48.384 "fast_io_fail_timeout_sec": 0, 00:05:48.384 "generate_uuids": false, 00:05:48.384 "high_priority_weight": 0, 00:05:48.384 "io_path_stat": false, 00:05:48.384 "io_queue_requests": 0, 00:05:48.384 "keep_alive_timeout_ms": 10000, 00:05:48.384 "low_priority_weight": 0, 00:05:48.384 
"medium_priority_weight": 0, 00:05:48.384 "nvme_adminq_poll_period_us": 10000, 00:05:48.384 "nvme_error_stat": false, 00:05:48.384 "nvme_ioq_poll_period_us": 0, 00:05:48.384 "rdma_cm_event_timeout_ms": 0, 00:05:48.384 "rdma_max_cq_size": 0, 00:05:48.384 "rdma_srq_size": 0, 00:05:48.384 "reconnect_delay_sec": 0, 00:05:48.384 "timeout_admin_us": 0, 00:05:48.384 "timeout_us": 0, 00:05:48.384 "transport_ack_timeout": 0, 00:05:48.384 "transport_retry_count": 4, 00:05:48.384 "transport_tos": 0 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "bdev_nvme_set_hotplug", 00:05:48.384 "params": { 00:05:48.384 "enable": false, 00:05:48.384 "period_us": 100000 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "bdev_wait_for_examine" 00:05:48.384 } 00:05:48.384 ] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "scsi", 00:05:48.384 "config": null 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "scheduler", 00:05:48.384 "config": [ 00:05:48.384 { 00:05:48.384 "method": "framework_set_scheduler", 00:05:48.384 "params": { 00:05:48.384 "name": "static" 00:05:48.384 } 00:05:48.384 } 00:05:48.384 ] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "vhost_scsi", 00:05:48.384 "config": [] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "vhost_blk", 00:05:48.384 "config": [] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "ublk", 00:05:48.384 "config": [] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "nbd", 00:05:48.384 "config": [] 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "subsystem": "nvmf", 00:05:48.384 "config": [ 00:05:48.384 { 00:05:48.384 "method": "nvmf_set_config", 00:05:48.384 "params": { 00:05:48.384 "admin_cmd_passthru": { 00:05:48.384 "identify_ctrlr": false 00:05:48.384 }, 00:05:48.384 "discovery_filter": "match_any" 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "nvmf_set_max_subsystems", 00:05:48.384 "params": { 00:05:48.384 "max_subsystems": 1024 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "nvmf_set_crdt", 00:05:48.384 "params": { 00:05:48.384 "crdt1": 0, 00:05:48.384 "crdt2": 0, 00:05:48.384 "crdt3": 0 00:05:48.384 } 00:05:48.384 }, 00:05:48.384 { 00:05:48.384 "method": "nvmf_create_transport", 00:05:48.384 "params": { 00:05:48.384 "abort_timeout_sec": 1, 00:05:48.384 "ack_timeout": 0, 00:05:48.384 "buf_cache_size": 4294967295, 00:05:48.384 "c2h_success": true, 00:05:48.384 "data_wr_pool_size": 0, 00:05:48.384 "dif_insert_or_strip": false, 00:05:48.384 "in_capsule_data_size": 4096, 00:05:48.384 "io_unit_size": 131072, 00:05:48.384 "max_aq_depth": 128, 00:05:48.385 "max_io_qpairs_per_ctrlr": 127, 00:05:48.385 "max_io_size": 131072, 00:05:48.385 "max_queue_depth": 128, 00:05:48.385 "num_shared_buffers": 511, 00:05:48.385 "sock_priority": 0, 00:05:48.385 "trtype": "TCP", 00:05:48.385 "zcopy": false 00:05:48.385 } 00:05:48.385 } 00:05:48.385 ] 00:05:48.385 }, 00:05:48.385 { 00:05:48.385 "subsystem": "iscsi", 00:05:48.385 "config": [ 00:05:48.385 { 00:05:48.385 "method": "iscsi_set_options", 00:05:48.385 "params": { 00:05:48.385 "allow_duplicated_isid": false, 00:05:48.385 "chap_group": 0, 00:05:48.385 "data_out_pool_size": 2048, 00:05:48.385 "default_time2retain": 20, 00:05:48.385 "default_time2wait": 2, 00:05:48.385 "disable_chap": false, 00:05:48.385 "error_recovery_level": 0, 00:05:48.385 "first_burst_length": 8192, 00:05:48.385 "immediate_data": true, 00:05:48.385 "immediate_data_pool_size": 16384, 00:05:48.385 "max_connections_per_session": 2, 
00:05:48.385 "max_large_datain_per_connection": 64, 00:05:48.385 "max_queue_depth": 64, 00:05:48.385 "max_r2t_per_connection": 4, 00:05:48.385 "max_sessions": 128, 00:05:48.385 "mutual_chap": false, 00:05:48.385 "node_base": "iqn.2016-06.io.spdk", 00:05:48.385 "nop_in_interval": 30, 00:05:48.385 "nop_timeout": 60, 00:05:48.385 "pdu_pool_size": 36864, 00:05:48.385 "require_chap": false 00:05:48.385 } 00:05:48.385 } 00:05:48.385 ] 00:05:48.385 } 00:05:48.385 ] 00:05:48.385 } 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 73312 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 73312 ']' 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 73312 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73312 00:05:48.385 killing process with pid 73312 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73312' 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 73312 00:05:48.385 20:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 73312 00:05:48.953 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73352 00:05:48.953 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.953 20:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 73352 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 73352 ']' 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 73352 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73352 00:05:54.247 killing process with pid 73352 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73352' 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 73352 00:05:54.247 20:05:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 73352 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.247 ************************************ 00:05:54.247 END TEST skip_rpc_with_json 00:05:54.247 ************************************ 00:05:54.247 00:05:54.247 real 0m7.029s 00:05:54.247 user 0m6.744s 00:05:54.247 sys 0m0.680s 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.247 20:05:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.247 20:05:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.247 20:05:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.247 20:05:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.247 ************************************ 00:05:54.247 START TEST skip_rpc_with_delay 00:05:54.247 ************************************ 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.247 [2024-07-14 20:05:43.260235] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:54.247 [2024-07-14 20:05:43.261121] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.247 00:05:54.247 real 0m0.095s 00:05:54.247 user 0m0.058s 00:05:54.247 sys 0m0.035s 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.247 20:05:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:54.247 ************************************ 00:05:54.247 END TEST skip_rpc_with_delay 00:05:54.247 ************************************ 00:05:54.247 20:05:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:54.247 20:05:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:54.247 20:05:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:54.247 20:05:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.247 20:05:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.247 20:05:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.504 ************************************ 00:05:54.504 START TEST exit_on_failed_rpc_init 00:05:54.504 ************************************ 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73460 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 73460 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 73460 ']' 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.504 20:05:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.504 [2024-07-14 20:05:43.403312] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
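The skip_rpc_with_delay run above is a negative test: it expects spdk_tgt to refuse to start when --no-rpc-server is combined with --wait-for-rpc, and it checks that through the NOT/valid_exec_arg helpers in autotest_common.sh. A minimal sketch of that pattern is below; assert_fails is a simplified stand-in written for illustration, not the actual helper, and the relative spdk_tgt path is assumed to be run from the repo root.

  # sketch: expect a command to exit non-zero, in the spirit of NOT()/valid_exec_arg() above
  assert_fails() {
      local es=0
      "$@" || es=$?        # run the command and capture its exit status
      (( es != 0 ))        # succeed only if the command actually failed
  }
  assert_fails ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc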
00:05:54.504 [2024-07-14 20:05:43.403846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73460 ] 00:05:54.504 [2024-07-14 20:05:43.544009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.763 [2024-07-14 20:05:43.637672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.328 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:55.329 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.587 [2024-07-14 20:05:44.438947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:55.587 [2024-07-14 20:05:44.439025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73491 ] 00:05:55.587 [2024-07-14 20:05:44.572656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.587 [2024-07-14 20:05:44.634274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.587 [2024-07-14 20:05:44.634660] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
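The exit_on_failed_rpc_init test starts a second spdk_tgt while the first one still owns /var/tmp/spdk.sock, so the RPC listener fails ("socket path ... in use") and the app stops with a non-zero code, which is exactly what the test wants to observe. For contrast, two targets can coexist if each gets its own RPC socket via -r; the sketch below is illustrative (socket paths are made up, and depending on the environment separate instances may also need distinct core masks and shared-memory ids).

  # each instance gets its own core mask and its own RPC socket
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
  # address each one explicitly through its socket
  scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods
  scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods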
00:05:55.587 [2024-07-14 20:05:44.634828] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:55.587 [2024-07-14 20:05:44.634992] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 73460 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 73460 ']' 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 73460 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73460 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.844 killing process with pid 73460 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73460' 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 73460 00:05:55.844 20:05:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 73460 00:05:56.102 00:05:56.103 real 0m1.751s 00:05:56.103 user 0m2.001s 00:05:56.103 sys 0m0.428s 00:05:56.103 20:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.103 20:05:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.103 ************************************ 00:05:56.103 END TEST exit_on_failed_rpc_init 00:05:56.103 ************************************ 00:05:56.103 20:05:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.103 00:05:56.103 real 0m14.584s 00:05:56.103 user 0m13.938s 00:05:56.103 sys 0m1.599s 00:05:56.103 20:05:45 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.103 ************************************ 00:05:56.103 END TEST skip_rpc 00:05:56.103 20:05:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.103 ************************************ 00:05:56.103 20:05:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.103 20:05:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.103 20:05:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.103 20:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.103 
************************************ 00:05:56.103 START TEST rpc_client 00:05:56.103 ************************************ 00:05:56.103 20:05:45 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:56.361 * Looking for test storage... 00:05:56.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:56.361 20:05:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:56.361 OK 00:05:56.361 20:05:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.361 00:05:56.361 real 0m0.104s 00:05:56.361 user 0m0.040s 00:05:56.361 sys 0m0.071s 00:05:56.361 20:05:45 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.361 ************************************ 00:05:56.361 END TEST rpc_client 00:05:56.361 ************************************ 00:05:56.361 20:05:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.361 20:05:45 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.361 20:05:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.361 20:05:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.361 20:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.361 ************************************ 00:05:56.361 START TEST json_config 00:05:56.361 ************************************ 00:05:56.361 20:05:45 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.361 20:05:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.361 20:05:45 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.361 20:05:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.361 20:05:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.361 20:05:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.361 20:05:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.361 20:05:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.361 20:05:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@47 -- # : 0 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.361 20:05:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:56.361 20:05:45 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:56.361 20:05:45 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.362 20:05:45 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:56.362 INFO: JSON configuration test init 00:05:56.362 20:05:45 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:56.362 20:05:45 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.362 20:05:45 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.362 20:05:45 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:56.362 20:05:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:56.362 20:05:45 json_config -- json_config/common.sh@10 -- # shift 00:05:56.362 20:05:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.362 20:05:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.362 20:05:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.362 20:05:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.362 20:05:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.362 20:05:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73614 00:05:56.362 20:05:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.362 Waiting for target to run... 00:05:56.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
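json_config/common.sh keeps one entry per app in the associative arrays traced above (app_pid, app_socket, app_params, configs_path) and builds the launch command from them; the actual invocation appears a few lines further down. A condensed sketch of that pattern, with start_app as a simplified illustrative stand-in for json_config_test_start_app:

  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A app_pid=()

  start_app() {                                   # sketch of json_config_test_start_app
      local app=$1; shift
      # params are intentionally unquoted so '-m 0x1 -s 1024' splits into separate arguments
      ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
      app_pid[$app]=$!
  }
  start_app target --wait-for-rpc                 # as launched in the trace above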
00:05:56.362 20:05:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:56.362 20:05:45 json_config -- json_config/common.sh@25 -- # waitforlisten 73614 /var/tmp/spdk_tgt.sock 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@827 -- # '[' -z 73614 ']' 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.362 20:05:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.620 [2024-07-14 20:05:45.502419] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:56.620 [2024-07-14 20:05:45.502762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73614 ] 00:05:56.879 [2024-07-14 20:05:45.928781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.138 [2024-07-14 20:05:45.979679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.397 00:05:57.397 20:05:46 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.397 20:05:46 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:57.397 20:05:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:57.397 20:05:46 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:57.397 20:05:46 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:57.397 20:05:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:57.397 20:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.397 20:05:46 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:57.397 20:05:46 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:57.397 20:05:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.397 20:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.654 20:05:46 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:57.654 20:05:46 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:57.654 20:05:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:57.913 20:05:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:57.913 20:05:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:57.913 20:05:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:57.913 20:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.913 20:05:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:57.913 20:05:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:57.913 20:05:46 json_config -- json_config/json_config.sh@46 -- # 
local enabled_types 00:05:57.913 20:05:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:57.913 20:05:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:57.913 20:05:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:58.480 20:05:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.480 20:05:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:58.480 20:05:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:58.480 20:05:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:58.480 20:05:47 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:58.480 20:05:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:58.739 MallocForNvmf0 00:05:58.739 20:05:47 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:58.739 20:05:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:58.998 MallocForNvmf1 00:05:58.998 20:05:47 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:58.998 20:05:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:59.256 [2024-07-14 20:05:48.129787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.256 20:05:48 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.256 20:05:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:05:59.514 20:05:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.514 20:05:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.773 20:05:48 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:59.773 20:05:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:00.032 20:05:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:00.032 20:05:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:00.291 [2024-07-14 20:05:49.126307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:00.291 20:05:49 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:00.291 20:05:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.291 20:05:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.291 20:05:49 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:00.291 20:05:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.291 20:05:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.291 20:05:49 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:00.291 20:05:49 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.291 20:05:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.613 MallocBdevForConfigChangeCheck 00:06:00.613 20:05:49 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:00.613 20:05:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.613 20:05:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.613 20:05:49 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:00.613 20:05:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.871 INFO: shutting down applications... 00:06:00.871 20:05:49 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
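The NVMe-oF target state built above is nothing more than a sequence of rpc.py calls against the test socket; the commands below are copied from the trace (only the RPC shell variable is an added convenience), so the same configuration can be reproduced by hand against a running spdk_tgt.

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420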
00:06:00.871 20:05:49 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:00.871 20:05:49 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:00.871 20:05:49 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:00.871 20:05:49 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:01.129 Calling clear_iscsi_subsystem 00:06:01.129 Calling clear_nvmf_subsystem 00:06:01.129 Calling clear_nbd_subsystem 00:06:01.129 Calling clear_ublk_subsystem 00:06:01.129 Calling clear_vhost_blk_subsystem 00:06:01.129 Calling clear_vhost_scsi_subsystem 00:06:01.129 Calling clear_bdev_subsystem 00:06:01.129 20:05:50 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:01.129 20:05:50 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:01.129 20:05:50 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:01.129 20:05:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.129 20:05:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:01.129 20:05:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:01.696 20:05:50 json_config -- json_config/json_config.sh@345 -- # break 00:06:01.696 20:05:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:01.696 20:05:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:01.696 20:05:50 json_config -- json_config/common.sh@31 -- # local app=target 00:06:01.696 20:05:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:01.696 20:05:50 json_config -- json_config/common.sh@35 -- # [[ -n 73614 ]] 00:06:01.696 20:05:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 73614 00:06:01.696 20:05:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:01.696 20:05:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.696 20:05:50 json_config -- json_config/common.sh@41 -- # kill -0 73614 00:06:01.696 20:05:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.263 20:05:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.263 20:05:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.263 20:05:51 json_config -- json_config/common.sh@41 -- # kill -0 73614 00:06:02.263 20:05:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.263 20:05:51 json_config -- json_config/common.sh@43 -- # break 00:06:02.263 20:05:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.263 SPDK target shutdown done 00:06:02.263 20:05:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.263 INFO: relaunching applications... 00:06:02.263 20:05:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
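The shutdown that follows clears the configuration, sends SIGINT to the target, and then polls the pid instead of killing it hard. A compact sketch of the wait loop as it runs in common.sh above (30 polls of 0.5s, so roughly a 15 second grace period; $pid stands for the app's recorded pid):

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests whether the pid is still alive
      sleep 0.5
  done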
00:06:02.263 20:05:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:02.263 20:05:51 json_config -- json_config/common.sh@9 -- # local app=target 00:06:02.263 20:05:51 json_config -- json_config/common.sh@10 -- # shift 00:06:02.263 20:05:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.263 20:05:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.263 20:05:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.263 20:05:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.263 20:05:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.263 20:05:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73884 00:06:02.263 Waiting for target to run... 00:06:02.263 20:05:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:02.263 20:05:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.263 20:05:51 json_config -- json_config/common.sh@25 -- # waitforlisten 73884 /var/tmp/spdk_tgt.sock 00:06:02.263 20:05:51 json_config -- common/autotest_common.sh@827 -- # '[' -z 73884 ']' 00:06:02.263 20:05:51 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.263 20:05:51 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.263 20:05:51 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.263 20:05:51 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.263 20:05:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.263 [2024-07-14 20:05:51.137530] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:02.263 [2024-07-14 20:05:51.137641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73884 ] 00:06:02.521 [2024-07-14 20:05:51.553919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.779 [2024-07-14 20:05:51.625619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.038 [2024-07-14 20:05:51.932044] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.038 [2024-07-14 20:05:51.964091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.038 20:05:52 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.038 20:05:52 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:03.038 00:06:03.038 20:05:52 json_config -- json_config/common.sh@26 -- # echo '' 00:06:03.038 20:05:52 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:03.038 INFO: Checking if target configuration is the same... 00:06:03.038 20:05:52 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
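The relaunch above is the persistence round trip the json_config test exercises: the live target state is captured with save_config and fed back on the next start through --json. A minimal sketch of that cycle, using the socket, core mask, and file name shown in the trace:

  # capture the live configuration from the running target
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # ...stop the target, then restart it from the saved file
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json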
00:06:03.038 20:05:52 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.038 20:05:52 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:03.038 20:05:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.038 + '[' 2 -ne 2 ']' 00:06:03.038 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:03.038 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:03.038 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:03.038 +++ basename /dev/fd/62 00:06:03.038 ++ mktemp /tmp/62.XXX 00:06:03.038 + tmp_file_1=/tmp/62.Ldk 00:06:03.038 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.038 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.038 + tmp_file_2=/tmp/spdk_tgt_config.json.LxF 00:06:03.038 + ret=0 00:06:03.038 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:03.604 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:03.604 + diff -u /tmp/62.Ldk /tmp/spdk_tgt_config.json.LxF 00:06:03.604 INFO: JSON config files are the same 00:06:03.604 + echo 'INFO: JSON config files are the same' 00:06:03.604 + rm /tmp/62.Ldk /tmp/spdk_tgt_config.json.LxF 00:06:03.604 + exit 0 00:06:03.604 20:05:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:03.604 INFO: changing configuration and checking if this can be detected... 00:06:03.604 20:05:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:03.604 20:05:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.604 20:05:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.862 20:05:52 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.862 20:05:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:03.862 20:05:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.862 + '[' 2 -ne 2 ']' 00:06:03.862 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:03.862 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
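Both comparison passes (the "same" check above and the "change detected" check that follows) use the same mechanics from json_diff.sh: normalize each config with config_filter.py -method sort, then diff the normalized files. A hedged sketch of that flow, assuming config_filter.py filters stdin to stdout as the pipeline above suggests; the /tmp file names here are illustrative, not the mktemp names from the log.

  sort_cfg() { test/json_config/config_filter.py -method sort; }   # normalize key/element ordering
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > /tmp/live.json
  sort_cfg < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'configs match'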
00:06:03.862 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:03.862 +++ basename /dev/fd/62 00:06:03.862 ++ mktemp /tmp/62.XXX 00:06:03.862 + tmp_file_1=/tmp/62.wK6 00:06:03.862 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.862 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.862 + tmp_file_2=/tmp/spdk_tgt_config.json.9Gl 00:06:03.862 + ret=0 00:06:03.862 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:04.120 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:04.120 + diff -u /tmp/62.wK6 /tmp/spdk_tgt_config.json.9Gl 00:06:04.120 + ret=1 00:06:04.120 + echo '=== Start of file: /tmp/62.wK6 ===' 00:06:04.120 + cat /tmp/62.wK6 00:06:04.120 + echo '=== End of file: /tmp/62.wK6 ===' 00:06:04.120 + echo '' 00:06:04.120 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9Gl ===' 00:06:04.120 + cat /tmp/spdk_tgt_config.json.9Gl 00:06:04.120 + echo '=== End of file: /tmp/spdk_tgt_config.json.9Gl ===' 00:06:04.120 + echo '' 00:06:04.120 + rm /tmp/62.wK6 /tmp/spdk_tgt_config.json.9Gl 00:06:04.120 + exit 1 00:06:04.120 INFO: configuration change detected. 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:04.120 20:05:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.120 20:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@317 -- # [[ -n 73884 ]] 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:04.120 20:05:53 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:04.120 20:05:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.120 20:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.377 20:05:53 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:04.377 20:05:53 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:04.377 20:05:53 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:04.377 20:05:53 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:04.377 20:05:53 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:04.377 20:05:53 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.377 20:05:53 json_config -- json_config/json_config.sh@323 -- # killprocess 73884 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@946 -- # '[' -z 73884 ']' 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@950 -- # kill -0 73884 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@951 -- # uname 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73884 00:06:04.377 
20:05:53 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.377 killing process with pid 73884 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73884' 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@965 -- # kill 73884 00:06:04.377 20:05:53 json_config -- common/autotest_common.sh@970 -- # wait 73884 00:06:04.634 20:05:53 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:04.634 20:05:53 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:04.635 20:05:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.635 20:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 20:05:53 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:04.635 INFO: Success 00:06:04.635 20:05:53 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:04.635 00:06:04.635 real 0m8.205s 00:06:04.635 user 0m11.662s 00:06:04.635 sys 0m1.856s 00:06:04.635 20:05:53 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.635 20:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 ************************************ 00:06:04.635 END TEST json_config 00:06:04.635 ************************************ 00:06:04.635 20:05:53 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:04.635 20:05:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.635 20:05:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.635 20:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 ************************************ 00:06:04.635 START TEST json_config_extra_key 00:06:04.635 ************************************ 00:06:04.635 20:05:53 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:06:04.635 20:05:53 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:04.635 20:05:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.635 20:05:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.635 20:05:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.635 20:05:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.635 20:05:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.635 20:05:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.635 20:05:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:04.635 20:05:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.635 20:05:53 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:04.635 20:05:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.635 INFO: launching applications... 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:04.635 20:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74054 00:06:04.635 Waiting for target to run... 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
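Unlike the previous test, json_config_extra_key starts the target directly from test/json_config/extra_key.json rather than building state over RPC. The contents of that file are not shown in this log; the heredoc below is only an illustrative config in the subsystems/method/params shape that spdk_tgt accepts with --json, with a made-up file name and a Malloc0 bdev as the example payload.

  cat > /tmp/example_config.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
          }
        ]
      }
    ]
  }
  EOF
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/example_config.json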
00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74054 /var/tmp/spdk_tgt.sock 00:06:04.635 20:05:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:04.635 20:05:53 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 74054 ']' 00:06:04.635 20:05:53 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.635 20:05:53 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.635 20:05:53 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.635 20:05:53 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.635 20:05:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.893 [2024-07-14 20:05:53.741799] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:04.893 [2024-07-14 20:05:53.741911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74054 ] 00:06:05.150 [2024-07-14 20:05:54.193276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.408 [2024-07-14 20:05:54.269545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.666 20:05:54 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.666 20:05:54 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:05.666 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:05.666 INFO: shutting down applications... 00:06:05.666 20:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:05.666 20:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74054 ]] 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74054 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74054 00:06:05.666 20:05:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.233 20:05:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.233 20:05:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.233 20:05:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74054 00:06:06.233 20:05:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.233 20:05:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:06.233 20:05:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.233 SPDK target shutdown done 00:06:06.233 20:05:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.233 Success 00:06:06.233 20:05:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:06.233 00:06:06.233 real 0m1.614s 00:06:06.233 user 0m1.463s 00:06:06.233 sys 0m0.460s 00:06:06.233 20:05:55 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.233 20:05:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.233 ************************************ 00:06:06.233 END TEST json_config_extra_key 00:06:06.233 ************************************ 00:06:06.233 20:05:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.233 20:05:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.233 20:05:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.233 20:05:55 -- common/autotest_common.sh@10 -- # set +x 00:06:06.233 ************************************ 00:06:06.233 START TEST alias_rpc 00:06:06.233 ************************************ 00:06:06.233 20:05:55 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.491 * Looking for test storage... 
00:06:06.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:06.491 20:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:06.491 20:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74130 00:06:06.491 20:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.491 20:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74130 00:06:06.491 20:05:55 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 74130 ']' 00:06:06.491 20:05:55 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.491 20:05:55 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.491 20:05:55 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.491 20:05:55 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.491 20:05:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.491 [2024-07-14 20:05:55.411927] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:06.491 [2024-07-14 20:05:55.412056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74130 ] 00:06:06.491 [2024-07-14 20:05:55.544596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.749 [2024-07-14 20:05:55.620116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.314 20:05:56 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.314 20:05:56 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:07.314 20:05:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:07.570 20:05:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74130 00:06:07.570 20:05:56 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 74130 ']' 00:06:07.570 20:05:56 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 74130 00:06:07.570 20:05:56 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:07.570 20:05:56 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.570 20:05:56 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74130 00:06:07.828 20:05:56 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:07.828 20:05:56 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:07.828 20:05:56 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74130' 00:06:07.828 killing process with pid 74130 00:06:07.828 20:05:56 alias_rpc -- common/autotest_common.sh@965 -- # kill 74130 00:06:07.828 20:05:56 alias_rpc -- common/autotest_common.sh@970 -- # wait 74130 00:06:08.086 00:06:08.086 real 0m1.793s 00:06:08.086 user 0m1.986s 00:06:08.086 sys 0m0.468s 00:06:08.086 20:05:57 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.086 20:05:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.086 ************************************ 00:06:08.086 END TEST alias_rpc 00:06:08.086 ************************************ 00:06:08.086 20:05:57 -- 
spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:06:08.086 20:05:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.086 20:05:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.086 20:05:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.086 20:05:57 -- common/autotest_common.sh@10 -- # set +x 00:06:08.086 ************************************ 00:06:08.086 START TEST dpdk_mem_utility 00:06:08.086 ************************************ 00:06:08.086 20:05:57 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.344 * Looking for test storage... 00:06:08.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.344 20:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.344 20:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74222 00:06:08.344 20:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74222 00:06:08.344 20:05:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.344 20:05:57 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 74222 ']' 00:06:08.344 20:05:57 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.344 20:05:57 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.344 20:05:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.344 20:05:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.344 20:05:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.344 [2024-07-14 20:05:57.266447] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:08.344 [2024-07-14 20:05:57.266578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74222 ] 00:06:08.344 [2024-07-14 20:05:57.406975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.603 [2024-07-14 20:05:57.479219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.539 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.539 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:09.539 20:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.539 20:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.539 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.539 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.539 { 00:06:09.539 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.539 } 00:06:09.539 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.539 20:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.539 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:09.539 1 heaps totaling size 814.000000 MiB 00:06:09.539 size: 814.000000 MiB heap id: 0 00:06:09.539 end heaps---------- 00:06:09.539 8 mempools totaling size 598.116089 MiB 00:06:09.539 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.539 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.539 size: 84.521057 MiB name: bdev_io_74222 00:06:09.539 size: 51.011292 MiB name: evtpool_74222 00:06:09.539 size: 50.003479 MiB name: msgpool_74222 00:06:09.539 size: 21.763794 MiB name: PDU_Pool 00:06:09.539 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.539 size: 0.026123 MiB name: Session_Pool 00:06:09.539 end mempools------- 00:06:09.539 6 memzones totaling size 4.142822 MiB 00:06:09.539 size: 1.000366 MiB name: RG_ring_0_74222 00:06:09.539 size: 1.000366 MiB name: RG_ring_1_74222 00:06:09.539 size: 1.000366 MiB name: RG_ring_4_74222 00:06:09.539 size: 1.000366 MiB name: RG_ring_5_74222 00:06:09.539 size: 0.125366 MiB name: RG_ring_2_74222 00:06:09.539 size: 0.015991 MiB name: RG_ring_3_74222 00:06:09.539 end memzones------- 00:06:09.540 20:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.540 heap id: 0 total size: 814.000000 MiB number of busy elements: 218 number of free elements: 15 00:06:09.540 list of free elements. 
size: 12.486938 MiB 00:06:09.540 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:09.540 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:09.540 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:09.540 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:09.540 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:09.540 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:09.540 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:09.540 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:09.540 element at address: 0x200000200000 with size: 0.837036 MiB 00:06:09.540 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:06:09.540 element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:09.540 element at address: 0x200000800000 with size: 0.487061 MiB 00:06:09.540 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:09.540 element at address: 0x200027e00000 with size: 0.398315 MiB 00:06:09.540 element at address: 0x200003a00000 with size: 0.351685 MiB 00:06:09.540 list of standard malloc elements. size: 199.250488 MiB 00:06:09.540 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:09.540 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:09.540 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:09.540 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:09.540 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:09.540 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:09.540 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:09.540 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:09.540 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:09.540 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:06:09.540 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:09.540 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94780 
with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:09.540 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:09.541 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:09.541 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:09.541 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:09.541 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:09.541 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e66040 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e340 with size: 0.000183 MiB 
00:06:09.541 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:09.541 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:09.541 list of memzone associated elements. 
size: 602.262573 MiB 00:06:09.541 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:09.541 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.541 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:09.541 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.541 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:09.541 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74222_0 00:06:09.541 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:09.541 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74222_0 00:06:09.541 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:09.541 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74222_0 00:06:09.541 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:09.541 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.541 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:09.541 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.541 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:09.541 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74222 00:06:09.541 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:09.541 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74222 00:06:09.541 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:09.541 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74222 00:06:09.541 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:09.541 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.541 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:09.541 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.541 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:09.541 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.541 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:09.541 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.541 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:09.541 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74222 00:06:09.541 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:09.541 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74222 00:06:09.541 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:09.541 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74222 00:06:09.541 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:09.541 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74222 00:06:09.541 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:09.541 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74222 00:06:09.541 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:09.541 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.541 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:09.541 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.541 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:09.541 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.541 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:09.541 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_74222 00:06:09.541 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:09.541 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.541 element at address: 0x200027e66100 with size: 0.023743 MiB 00:06:09.541 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.541 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:09.541 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74222 00:06:09.541 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:06:09.541 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.541 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:09.541 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74222 00:06:09.541 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:09.541 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74222 00:06:09.541 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:06:09.541 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.541 20:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.541 20:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74222 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 74222 ']' 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 74222 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74222 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.541 killing process with pid 74222 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74222' 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 74222 00:06:09.541 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 74222 00:06:09.800 00:06:09.800 real 0m1.693s 00:06:09.800 user 0m1.850s 00:06:09.800 sys 0m0.467s 00:06:09.800 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.800 20:05:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.800 ************************************ 00:06:09.800 END TEST dpdk_mem_utility 00:06:09.800 ************************************ 00:06:09.800 20:05:58 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.800 20:05:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.800 20:05:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.800 20:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.800 ************************************ 00:06:09.800 START TEST event 00:06:09.800 ************************************ 00:06:09.800 20:05:58 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.059 * Looking for test storage... 
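The long dump above is produced by the dpdk_mem_utility test: the running target is asked for its DPDK memory statistics over RPC (env_dpdk_get_mem_stats, which reports the dump file /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that dump as heaps, mempools and memzones, with the -m 0 invocation printing the per-element detail for heap id 0. A short sketch of that sequence, with the socket path and script locations taken from the log:

    # Sketch of the dpdk_mem_utility flow traced above.
    scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py          # heap / mempool / memzone totals
    scripts/dpdk_mem_info.py -m 0     # per-element detail for heap id 0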
00:06:10.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.059 20:05:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.059 20:05:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.059 20:05:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.059 20:05:58 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:10.059 20:05:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.059 20:05:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.059 ************************************ 00:06:10.059 START TEST event_perf 00:06:10.059 ************************************ 00:06:10.059 20:05:58 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.059 Running I/O for 1 seconds...[2024-07-14 20:05:58.957915] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:10.059 [2024-07-14 20:05:58.958007] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74316 ] 00:06:10.059 [2024-07-14 20:05:59.087973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.318 [2024-07-14 20:05:59.144940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.318 [2024-07-14 20:05:59.145104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.318 [2024-07-14 20:05:59.145180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.318 Running I/O for 1 seconds...[2024-07-14 20:05:59.145553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.261 00:06:11.261 lcore 0: 119298 00:06:11.261 lcore 1: 119300 00:06:11.261 lcore 2: 119301 00:06:11.261 lcore 3: 119297 00:06:11.261 done. 00:06:11.261 00:06:11.261 real 0m1.271s 00:06:11.261 user 0m4.089s 00:06:11.261 sys 0m0.054s 00:06:11.261 20:06:00 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.261 20:06:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.261 ************************************ 00:06:11.261 END TEST event_perf 00:06:11.261 ************************************ 00:06:11.261 20:06:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.261 20:06:00 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:11.261 20:06:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.261 20:06:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.261 ************************************ 00:06:11.261 START TEST event_reactor 00:06:11.261 ************************************ 00:06:11.261 20:06:00 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.261 [2024-07-14 20:06:00.282697] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:11.261 [2024-07-14 20:06:00.282826] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74350 ] 00:06:11.534 [2024-07-14 20:06:00.418387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.534 [2024-07-14 20:06:00.481267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.910 test_start 00:06:12.910 oneshot 00:06:12.910 tick 100 00:06:12.910 tick 100 00:06:12.910 tick 250 00:06:12.910 tick 100 00:06:12.910 tick 100 00:06:12.910 tick 250 00:06:12.910 tick 100 00:06:12.910 tick 500 00:06:12.910 tick 100 00:06:12.910 tick 100 00:06:12.910 tick 250 00:06:12.910 tick 100 00:06:12.910 tick 100 00:06:12.910 test_end 00:06:12.910 00:06:12.910 real 0m1.294s 00:06:12.910 user 0m1.132s 00:06:12.910 sys 0m0.057s 00:06:12.910 20:06:01 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.910 20:06:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.910 ************************************ 00:06:12.910 END TEST event_reactor 00:06:12.910 ************************************ 00:06:12.910 20:06:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.910 20:06:01 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:12.910 20:06:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.910 20:06:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.910 ************************************ 00:06:12.910 START TEST event_reactor_perf 00:06:12.910 ************************************ 00:06:12.910 20:06:01 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.910 [2024-07-14 20:06:01.630635] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:12.910 [2024-07-14 20:06:01.630739] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74380 ] 00:06:12.910 [2024-07-14 20:06:01.753574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.910 [2024-07-14 20:06:01.816576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.846 test_start 00:06:13.846 test_end 00:06:13.846 Performance: 430534 events per second 00:06:13.846 00:06:13.846 real 0m1.263s 00:06:13.846 user 0m1.111s 00:06:13.846 sys 0m0.047s 00:06:13.846 20:06:02 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.846 20:06:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.846 ************************************ 00:06:13.846 END TEST event_reactor_perf 00:06:13.846 ************************************ 00:06:13.846 20:06:02 event -- event/event.sh@49 -- # uname -s 00:06:13.846 20:06:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:13.846 20:06:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:13.846 20:06:02 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.846 20:06:02 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.846 20:06:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.105 ************************************ 00:06:14.105 START TEST event_scheduler 00:06:14.105 ************************************ 00:06:14.105 20:06:02 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.105 * Looking for test storage... 00:06:14.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:14.105 20:06:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.105 20:06:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74442 00:06:14.105 20:06:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.105 20:06:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.105 20:06:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74442 00:06:14.105 20:06:03 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 74442 ']' 00:06:14.105 20:06:03 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.105 20:06:03 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.105 20:06:03 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.105 20:06:03 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.105 20:06:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.105 [2024-07-14 20:06:03.073921] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:14.105 [2024-07-14 20:06:03.074021] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74442 ] 00:06:14.362 [2024-07-14 20:06:03.215628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.362 [2024-07-14 20:06:03.360128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.362 [2024-07-14 20:06:03.360305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.362 [2024-07-14 20:06:03.360429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.362 [2024-07-14 20:06:03.361118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:15.298 20:06:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.298 POWER: Env isn't set yet! 00:06:15.298 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:15.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.298 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.298 POWER: Attempting to initialise PSTAT power management... 00:06:15.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.298 POWER: Cannot set governor of lcore 0 to performance 00:06:15.298 POWER: Attempting to initialise AMD PSTATE power management... 00:06:15.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.298 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.298 POWER: Attempting to initialise CPPC power management... 00:06:15.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.298 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.298 POWER: Attempting to initialise VM power management... 
00:06:15.298 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:15.298 POWER: Unable to set Power Management Environment for lcore 0 00:06:15.298 [2024-07-14 20:06:04.063702] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:15.298 [2024-07-14 20:06:04.063770] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:15.298 [2024-07-14 20:06:04.063823] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.298 [2024-07-14 20:06:04.063910] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.298 [2024-07-14 20:06:04.063971] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.298 [2024-07-14 20:06:04.064027] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.298 20:06:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.298 [2024-07-14 20:06:04.187794] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.298 20:06:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.298 20:06:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.298 ************************************ 00:06:15.298 START TEST scheduler_create_thread 00:06:15.298 ************************************ 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.298 2 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.298 3 00:06:15.298 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.299 20:06:04 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 4 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 5 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 6 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 7 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 8 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 9 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 10 
00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.299 20:06:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.675 20:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.675 20:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:16.675 20:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:16.675 20:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.675 20:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.052 20:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.052 00:06:18.052 real 0m2.611s 00:06:18.052 user 0m0.021s 00:06:18.052 sys 0m0.004s 00:06:18.052 20:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.052 ************************************ 00:06:18.052 END TEST scheduler_create_thread 00:06:18.052 ************************************ 00:06:18.052 20:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.052 20:06:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:18.052 20:06:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74442 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 74442 ']' 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 74442 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.052 20:06:06 event.event_scheduler 
-- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74442 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:18.052 killing process with pid 74442 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74442' 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 74442 00:06:18.052 20:06:06 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 74442 00:06:18.311 [2024-07-14 20:06:07.289423] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:18.570 00:06:18.570 real 0m4.573s 00:06:18.570 user 0m8.475s 00:06:18.570 sys 0m0.431s 00:06:18.570 20:06:07 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.570 20:06:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.570 ************************************ 00:06:18.570 END TEST event_scheduler 00:06:18.570 ************************************ 00:06:18.570 20:06:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:18.570 20:06:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:18.570 20:06:07 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.570 20:06:07 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.570 20:06:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.570 ************************************ 00:06:18.570 START TEST app_repeat 00:06:18.570 ************************************ 00:06:18.570 20:06:07 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74559 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74559' 00:06:18.570 Process app_repeat pid: 74559 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.570 spdk_app_start Round 0 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:18.570 20:06:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74559 /var/tmp/spdk-nbd.sock 00:06:18.570 20:06:07 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74559 ']' 00:06:18.570 20:06:07 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.570 20:06:07 event.app_repeat -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:06:18.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.570 20:06:07 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.570 20:06:07 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.570 20:06:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.570 [2024-07-14 20:06:07.593077] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:18.570 [2024-07-14 20:06:07.593180] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74559 ] 00:06:18.828 [2024-07-14 20:06:07.731437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.828 [2024-07-14 20:06:07.827238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.828 [2024-07-14 20:06:07.827264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.762 20:06:08 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.762 20:06:08 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:19.762 20:06:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.021 Malloc0 00:06:20.021 20:06:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.280 Malloc1 00:06:20.280 20:06:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.280 20:06:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.538 /dev/nbd0 00:06:20.538 20:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.538 20:06:09 
event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.538 20:06:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:20.538 20:06:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:20.538 20:06:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.539 1+0 records in 00:06:20.539 1+0 records out 00:06:20.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284893 s, 14.4 MB/s 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.539 20:06:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.539 20:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.539 20:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.539 20:06:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.797 /dev/nbd1 00:06:20.797 20:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.797 20:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.797 1+0 records in 00:06:20.797 1+0 records out 00:06:20.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218559 s, 18.7 MB/s 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:20.797 20:06:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:20.797 20:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.797 20:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.797 20:06:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.797 20:06:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.797 20:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.055 20:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.055 { 00:06:21.055 "bdev_name": "Malloc0", 00:06:21.055 "nbd_device": "/dev/nbd0" 00:06:21.055 }, 00:06:21.055 { 00:06:21.055 "bdev_name": "Malloc1", 00:06:21.055 "nbd_device": "/dev/nbd1" 00:06:21.055 } 00:06:21.055 ]' 00:06:21.055 20:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.055 20:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.055 { 00:06:21.055 "bdev_name": "Malloc0", 00:06:21.055 "nbd_device": "/dev/nbd0" 00:06:21.055 }, 00:06:21.055 { 00:06:21.055 "bdev_name": "Malloc1", 00:06:21.055 "nbd_device": "/dev/nbd1" 00:06:21.055 } 00:06:21.055 ]' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.055 /dev/nbd1' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.055 /dev/nbd1' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.055 256+0 records in 00:06:21.055 256+0 records out 00:06:21.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00812138 s, 129 MB/s 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.055 256+0 records in 00:06:21.055 256+0 records out 00:06:21.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252303 s, 41.6 MB/s 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.055 20:06:10 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.055 256+0 records in 00:06:21.055 256+0 records out 00:06:21.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026575 s, 39.5 MB/s 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.055 20:06:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.056 20:06:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.056 20:06:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.056 20:06:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.313 20:06:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.572 20:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.832 20:06:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.832 20:06:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.428 20:06:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.428 [2024-07-14 20:06:11.384111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.428 [2024-07-14 20:06:11.449872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.428 [2024-07-14 20:06:11.449881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.428 [2024-07-14 20:06:11.508152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.428 [2024-07-14 20:06:11.508230] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.709 20:06:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.709 spdk_app_start Round 1 00:06:25.709 20:06:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.709 20:06:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74559 /var/tmp/spdk-nbd.sock 00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74559 ']' 00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
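The Round 0 trace above exercises nbd_rpc_data_verify: two malloc bdevs are exported over NBD, 1 MiB of random data is written through each device with O_DIRECT, and the devices are compared back against the source file. A minimal sketch of that flow, reconstructed only from the commands visible in the trace (socket and file paths as they appear in the log; not the helper itself), is:

  # Sketch of the write/verify pass traced above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  "$RPC" -s "$SOCK" bdev_malloc_create 64 4096        # trace shows this returns Malloc0
  "$RPC" -s "$SOCK" bdev_malloc_create 64 4096        # and Malloc1
  "$RPC" -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
  "$RPC" -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of="$TMP" bs=4096 count=256      # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
      cmp -b -n 1M "$TMP" "$nbd"                              # verify pass
  done
  rm "$TMP"

  "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0
  "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd1

The verify pass compares only the first 1 MiB (cmp -n 1M), which matches the 256 x 4 KiB blocks written in the preceding dd.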
00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.709 20:06:14 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:25.709 20:06:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.709 Malloc0 00:06:25.709 20:06:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.967 Malloc1 00:06:26.226 20:06:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.226 20:06:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.226 /dev/nbd0 00:06:26.484 20:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.484 20:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.484 1+0 records in 00:06:26.484 1+0 records out 
00:06:26.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279606 s, 14.6 MB/s 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:26.484 20:06:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:26.484 20:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.484 20:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.484 20:06:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.484 /dev/nbd1 00:06:26.743 20:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.743 20:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.743 1+0 records in 00:06:26.743 1+0 records out 00:06:26.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300834 s, 13.6 MB/s 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:26.743 20:06:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:26.743 20:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.743 20:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.743 20:06:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.743 20:06:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.743 20:06:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.002 { 00:06:27.002 "bdev_name": "Malloc0", 00:06:27.002 "nbd_device": "/dev/nbd0" 00:06:27.002 }, 00:06:27.002 { 00:06:27.002 "bdev_name": "Malloc1", 00:06:27.002 "nbd_device": "/dev/nbd1" 00:06:27.002 } 
00:06:27.002 ]' 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.002 { 00:06:27.002 "bdev_name": "Malloc0", 00:06:27.002 "nbd_device": "/dev/nbd0" 00:06:27.002 }, 00:06:27.002 { 00:06:27.002 "bdev_name": "Malloc1", 00:06:27.002 "nbd_device": "/dev/nbd1" 00:06:27.002 } 00:06:27.002 ]' 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.002 /dev/nbd1' 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.002 /dev/nbd1' 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.002 256+0 records in 00:06:27.002 256+0 records out 00:06:27.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00762573 s, 138 MB/s 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.002 256+0 records in 00:06:27.002 256+0 records out 00:06:27.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243857 s, 43.0 MB/s 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.002 20:06:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.002 256+0 records in 00:06:27.002 256+0 records out 00:06:27.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266981 s, 39.3 MB/s 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.002 20:06:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.002 20:06:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.260 20:06:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.518 20:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.777 20:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.777 20:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.777 20:06:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.036 20:06:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.036 20:06:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.294 20:06:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.294 [2024-07-14 20:06:17.316470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.553 [2024-07-14 20:06:17.381025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.553 [2024-07-14 20:06:17.381037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.553 [2024-07-14 20:06:17.438864] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.553 [2024-07-14 20:06:17.438950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.085 spdk_app_start Round 2 00:06:31.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.085 20:06:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.085 20:06:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.085 20:06:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74559 /var/tmp/spdk-nbd.sock 00:06:31.085 20:06:20 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74559 ']' 00:06:31.085 20:06:20 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.085 20:06:20 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.085 20:06:20 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
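Every /dev/nbdN registration in the rounds above is gated by the waitfornbd helper from common/autotest_common.sh. As traced, it polls /proc/partitions a bounded number of times and then performs a one-block O_DIRECT read to confirm the device actually serves data. A simplified sketch of that logic follows; the retry delay and the temporary file location are assumptions, since only the loop bounds and the dd/stat checks are visible in the trace:

  waitfornbd() {
      local nbd_name=$1 i size
      # Wait (up to 20 tries) for the device to appear in /proc/partitions.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed delay; the trace only shows the loop bounds
      done
      # Read one 4 KiB block with O_DIRECT and make sure a non-empty copy landed.
      for ((i = 1; i <= 20; i++)); do
          dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
          size=$(stat -c %s /tmp/nbdtest)
          rm -f /tmp/nbdtest
          [ "$size" != 0 ] && return 0
      done
      return 1
  }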
00:06:31.085 20:06:20 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.085 20:06:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.343 20:06:20 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.343 20:06:20 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:31.343 20:06:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.601 Malloc0 00:06:31.859 20:06:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.859 Malloc1 00:06:31.859 20:06:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.859 20:06:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.859 20:06:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.860 20:06:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.120 /dev/nbd0 00:06:32.379 20:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.379 20:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.379 1+0 records in 00:06:32.379 1+0 records out 
00:06:32.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426792 s, 9.6 MB/s 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.379 20:06:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:32.379 20:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.379 20:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.379 20:06:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.638 /dev/nbd1 00:06:32.638 20:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.638 20:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.638 1+0 records in 00:06:32.638 1+0 records out 00:06:32.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291709 s, 14.0 MB/s 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.638 20:06:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:32.638 20:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.638 20:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.638 20:06:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.638 20:06:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.638 20:06:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.897 20:06:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.897 { 00:06:32.897 "bdev_name": "Malloc0", 00:06:32.897 "nbd_device": "/dev/nbd0" 00:06:32.897 }, 00:06:32.897 { 00:06:32.897 "bdev_name": "Malloc1", 00:06:32.897 "nbd_device": "/dev/nbd1" 00:06:32.897 } 
00:06:32.898 ]' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.898 { 00:06:32.898 "bdev_name": "Malloc0", 00:06:32.898 "nbd_device": "/dev/nbd0" 00:06:32.898 }, 00:06:32.898 { 00:06:32.898 "bdev_name": "Malloc1", 00:06:32.898 "nbd_device": "/dev/nbd1" 00:06:32.898 } 00:06:32.898 ]' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.898 /dev/nbd1' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.898 /dev/nbd1' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.898 256+0 records in 00:06:32.898 256+0 records out 00:06:32.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00624625 s, 168 MB/s 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.898 256+0 records in 00:06:32.898 256+0 records out 00:06:32.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270946 s, 38.7 MB/s 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.898 256+0 records in 00:06:32.898 256+0 records out 00:06:32.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027007 s, 38.8 MB/s 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.898 20:06:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.156 20:06:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.415 20:06:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.674 20:06:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.674 20:06:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.933 20:06:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.191 [2024-07-14 20:06:23.140471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.191 [2024-07-14 20:06:23.189789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.191 [2024-07-14 20:06:23.189799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.191 [2024-07-14 20:06:23.247751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.191 [2024-07-14 20:06:23.247802] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.476 20:06:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74559 /var/tmp/spdk-nbd.sock 00:06:37.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.476 20:06:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74559 ']' 00:06:37.476 20:06:25 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.476 20:06:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.476 20:06:25 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
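Each app_repeat round is torn down through killprocess from common/autotest_common.sh (pid 74559 in this run). The traced checks confirm the pid is still alive and is not a sudo wrapper before signalling and waiting on it. A condensed sketch is below; the sudo branch never executes in this log, so its handling here is an assumption:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                      # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      # Do not signal a sudo wrapper directly (branch not taken in this trace).
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }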
00:06:37.476 20:06:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.476 20:06:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:37.476 20:06:26 event.app_repeat -- event/event.sh@39 -- # killprocess 74559 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 74559 ']' 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 74559 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74559 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.476 killing process with pid 74559 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74559' 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@965 -- # kill 74559 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@970 -- # wait 74559 00:06:37.476 spdk_app_start is called in Round 0. 00:06:37.476 Shutdown signal received, stop current app iteration 00:06:37.476 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:37.476 spdk_app_start is called in Round 1. 00:06:37.476 Shutdown signal received, stop current app iteration 00:06:37.476 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:37.476 spdk_app_start is called in Round 2. 00:06:37.476 Shutdown signal received, stop current app iteration 00:06:37.476 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:37.476 spdk_app_start is called in Round 3. 00:06:37.476 Shutdown signal received, stop current app iteration 00:06:37.476 20:06:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:37.476 20:06:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:37.476 00:06:37.476 real 0m18.887s 00:06:37.476 user 0m42.390s 00:06:37.476 sys 0m2.983s 00:06:37.476 20:06:26 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.477 20:06:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.477 ************************************ 00:06:37.477 END TEST app_repeat 00:06:37.477 ************************************ 00:06:37.477 20:06:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:37.477 20:06:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:37.477 20:06:26 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.477 20:06:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.477 20:06:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.477 ************************************ 00:06:37.477 START TEST cpu_locks 00:06:37.477 ************************************ 00:06:37.477 20:06:26 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:37.735 * Looking for test storage... 
00:06:37.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:37.735 20:06:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:37.735 20:06:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:37.735 20:06:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:37.735 20:06:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:37.735 20:06:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.735 20:06:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.735 20:06:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.735 ************************************ 00:06:37.735 START TEST default_locks 00:06:37.735 ************************************ 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75188 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75188 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75188 ']' 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.735 20:06:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.735 [2024-07-14 20:06:26.663592] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:37.735 [2024-07-14 20:06:26.663705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75188 ] 00:06:37.735 [2024-07-14 20:06:26.803671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.993 [2024-07-14 20:06:26.900301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.927 20:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.927 20:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:38.927 20:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75188 00:06:38.927 20:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75188 00:06:38.927 20:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75188 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 75188 ']' 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 75188 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75188 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.186 killing process with pid 75188 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75188' 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 75188 00:06:39.186 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 75188 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75188 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75188 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75188 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75188 ']' 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.752 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.752 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75188) - No such process 00:06:39.752 ERROR: process (pid: 75188) is no longer running 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:39.752 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.753 00:06:39.753 real 0m1.991s 00:06:39.753 user 0m2.144s 00:06:39.753 sys 0m0.588s 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.753 20:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 ************************************ 00:06:39.753 END TEST default_locks 00:06:39.753 ************************************ 00:06:39.753 20:06:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:39.753 20:06:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.753 20:06:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.753 20:06:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 ************************************ 00:06:39.753 START TEST default_locks_via_rpc 00:06:39.753 ************************************ 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75252 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75252 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75252 ']' 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
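Note on the lock check exercised by the default_locks run above: the locks_exist helper traced at event/cpu_locks.sh@22 and @49 reduces, as far as the trace shows, to asking lslocks whether the target pid holds a file lock named spdk_cpu_lock. A minimal sketch reconstructed from the trace (not copied from the script source):

    # succeed only if the given spdk_tgt process holds the spdk_cpu_lock file lock
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 75188   # pid of the target started above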
00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.753 20:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 [2024-07-14 20:06:28.711926] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:39.753 [2024-07-14 20:06:28.712644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75252 ] 00:06:40.011 [2024-07-14 20:06:28.851628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.011 [2024-07-14 20:06:28.943891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.943 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75252 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75252 00:06:40.944 20:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75252 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 75252 ']' 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 75252 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75252 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.201 killing process with pid 75252 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75252' 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 75252 00:06:41.201 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 75252 00:06:41.767 00:06:41.767 real 0m1.933s 00:06:41.767 user 0m2.064s 00:06:41.767 sys 0m0.617s 00:06:41.767 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.767 ************************************ 00:06:41.767 END TEST default_locks_via_rpc 00:06:41.767 ************************************ 00:06:41.767 20:06:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.767 20:06:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:41.767 20:06:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.767 20:06:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.767 20:06:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.767 ************************************ 00:06:41.767 START TEST non_locking_app_on_locked_coremask 00:06:41.767 ************************************ 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75321 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75321 /var/tmp/spdk.sock 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75321 ']' 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.767 20:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.767 [2024-07-14 20:06:30.692708] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:41.767 [2024-07-14 20:06:30.692819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75321 ] 00:06:41.767 [2024-07-14 20:06:30.831558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.025 [2024-07-14 20:06:30.918946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75349 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75349 /var/tmp/spdk2.sock 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75349 ']' 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.630 20:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.889 [2024-07-14 20:06:31.726956] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:42.889 [2024-07-14 20:06:31.727056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75349 ] 00:06:42.889 [2024-07-14 20:06:31.870919] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
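For reference, the "CPU core locks deactivated" notice above comes from the second target being launched with core-mask locking disabled so it can share core 0 with pid 75321. A sketch of that invocation, with the binary path, mask and socket copied from the trace at event/cpu_locks.sh@83:

    # second instance on the same core mask, without taking the core lock,
    # listening on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # autotest_common.sh helper used throughout this log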
00:06:42.889 [2024-07-14 20:06:31.870973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.147 [2024-07-14 20:06:32.040756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.715 20:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.715 20:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:43.715 20:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75321 00:06:43.715 20:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75321 00:06:43.715 20:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75321 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75321 ']' 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75321 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75321 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:44.647 killing process with pid 75321 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75321' 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75321 00:06:44.647 20:06:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75321 00:06:45.213 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75349 00:06:45.213 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75349 ']' 00:06:45.213 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75349 00:06:45.213 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:45.213 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.213 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75349 00:06:45.471 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.471 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.471 killing process with pid 75349 00:06:45.471 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75349' 00:06:45.471 20:06:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75349 00:06:45.471 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75349 00:06:45.728 00:06:45.728 real 0m4.040s 00:06:45.728 user 0m4.461s 00:06:45.728 sys 0m1.207s 00:06:45.728 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.728 20:06:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.728 ************************************ 00:06:45.728 END TEST non_locking_app_on_locked_coremask 00:06:45.728 ************************************ 00:06:45.728 20:06:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.728 20:06:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.728 20:06:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.728 20:06:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.728 ************************************ 00:06:45.728 START TEST locking_app_on_unlocked_coremask 00:06:45.728 ************************************ 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75431 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75431 /var/tmp/spdk.sock 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75431 ']' 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.728 20:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.728 [2024-07-14 20:06:34.769961] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:45.728 [2024-07-14 20:06:34.770075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75431 ] 00:06:45.985 [2024-07-14 20:06:34.899676] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.985 [2024-07-14 20:06:34.899746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.985 [2024-07-14 20:06:34.978587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75440 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75440 /var/tmp/spdk2.sock 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75440 ']' 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.244 20:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.244 [2024-07-14 20:06:35.299287] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:46.244 [2024-07-14 20:06:35.299387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75440 ] 00:06:46.502 [2024-07-14 20:06:35.444029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.760 [2024-07-14 20:06:35.595124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.326 20:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.326 20:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:47.326 20:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75440 00:06:47.326 20:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75440 00:06:47.326 20:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75431 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75431 ']' 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75431 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75431 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.261 killing process with pid 75431 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75431' 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75431 00:06:48.261 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75431 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75440 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75440 ']' 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75440 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75440 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:06:48.829 killing process with pid 75440 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75440' 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75440 00:06:48.829 20:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75440 00:06:49.395 00:06:49.395 real 0m3.521s 00:06:49.395 user 0m3.738s 00:06:49.395 sys 0m1.107s 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.395 ************************************ 00:06:49.395 END TEST locking_app_on_unlocked_coremask 00:06:49.395 ************************************ 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.395 20:06:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:49.395 20:06:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.395 20:06:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.395 20:06:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.395 ************************************ 00:06:49.395 START TEST locking_app_on_locked_coremask 00:06:49.395 ************************************ 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75519 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75519 /var/tmp/spdk.sock 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75519 ']' 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.395 20:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.395 [2024-07-14 20:06:38.365151] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:49.395 [2024-07-14 20:06:38.365252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75519 ] 00:06:49.653 [2024-07-14 20:06:38.506752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.653 [2024-07-14 20:06:38.597397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75547 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75547 /var/tmp/spdk2.sock 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75547 /var/tmp/spdk2.sock 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75547 /var/tmp/spdk2.sock 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75547 ']' 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.586 20:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.586 [2024-07-14 20:06:39.372599] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:50.586 [2024-07-14 20:06:39.372712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75547 ] 00:06:50.586 [2024-07-14 20:06:39.509711] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75519 has claimed it. 00:06:50.586 [2024-07-14 20:06:39.509793] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.151 ERROR: process (pid: 75547) is no longer running 00:06:51.151 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75547) - No such process 00:06:51.151 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.151 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:51.151 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:51.151 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.151 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.152 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.152 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75519 00:06:51.152 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75519 00:06:51.152 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75519 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75519 ']' 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75519 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75519 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.410 killing process with pid 75519 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75519' 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75519 00:06:51.410 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75519 00:06:51.995 00:06:51.995 real 0m2.528s 00:06:51.995 user 0m2.892s 00:06:51.995 sys 0m0.605s 00:06:51.995 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.995 20:06:40 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.995 ************************************ 00:06:51.995 END TEST locking_app_on_locked_coremask 00:06:51.995 ************************************ 00:06:51.995 20:06:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:51.995 20:06:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.995 20:06:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.995 20:06:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.995 ************************************ 00:06:51.995 START TEST locking_overlapped_coremask 00:06:51.995 ************************************ 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75604 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75604 /var/tmp/spdk.sock 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75604 ']' 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.995 20:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.995 [2024-07-14 20:06:40.937392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:51.995 [2024-07-14 20:06:40.937525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75604 ] 00:06:51.995 [2024-07-14 20:06:41.068348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.253 [2024-07-14 20:06:41.176013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.253 [2024-07-14 20:06:41.176167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.253 [2024-07-14 20:06:41.176173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75634 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75634 /var/tmp/spdk2.sock 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75634 /var/tmp/spdk2.sock 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75634 /var/tmp/spdk2.sock 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75634 ']' 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.188 20:06:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.188 [2024-07-14 20:06:42.021411] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:53.188 [2024-07-14 20:06:42.021554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75634 ] 00:06:53.188 [2024-07-14 20:06:42.165428] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75604 has claimed it. 00:06:53.188 [2024-07-14 20:06:42.165509] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75634) - No such process 00:06:53.755 ERROR: process (pid: 75634) is no longer running 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75604 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 75604 ']' 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 75604 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75604 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:53.755 killing process with pid 75604 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75604' 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 75604 00:06:53.755 20:06:42 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 75604 00:06:54.323 00:06:54.323 real 0m2.333s 00:06:54.323 user 0m6.603s 00:06:54.323 sys 0m0.485s 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.323 ************************************ 00:06:54.323 END TEST locking_overlapped_coremask 00:06:54.323 ************************************ 00:06:54.323 20:06:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:54.323 20:06:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.323 20:06:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.323 20:06:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.323 ************************************ 00:06:54.323 START TEST locking_overlapped_coremask_via_rpc 00:06:54.323 ************************************ 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75680 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75680 /var/tmp/spdk.sock 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75680 ']' 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.323 20:06:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.323 [2024-07-14 20:06:43.316769] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:54.323 [2024-07-14 20:06:43.316871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75680 ] 00:06:54.581 [2024-07-14 20:06:43.448706] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
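The check_remaining_locks step traced at event/cpu_locks.sh@139 above, after locking_overlapped_coremask finished, verifies that exactly the lock files for cores 0-2 remain under /var/tmp, matching the 0x7 core mask. A sketch inferred from the expansion shown in the trace:

    # compare the lock files actually present with the expected set for mask 0x7
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # cores 0, 1 and 2, nothing else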
00:06:54.581 [2024-07-14 20:06:43.448774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.581 [2024-07-14 20:06:43.542676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.581 [2024-07-14 20:06:43.542797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.581 [2024-07-14 20:06:43.542806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75710 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75710 /var/tmp/spdk2.sock 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75710 ']' 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.517 20:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.517 [2024-07-14 20:06:44.401097] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:55.517 [2024-07-14 20:06:44.401215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75710 ] 00:06:55.517 [2024-07-14 20:06:44.545073] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.517 [2024-07-14 20:06:44.545142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.775 [2024-07-14 20:06:44.709268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.775 [2024-07-14 20:06:44.709414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.775 [2024-07-14 20:06:44.709421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.343 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.343 [2024-07-14 20:06:45.421160] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75680 has claimed it. 
00:06:56.602 2024/07/14 20:06:45 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:56.602 request: 00:06:56.602 { 00:06:56.602 "method": "framework_enable_cpumask_locks", 00:06:56.602 "params": {} 00:06:56.602 } 00:06:56.602 Got JSON-RPC error response 00:06:56.602 GoRPCClient: error on JSON-RPC call 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75680 /var/tmp/spdk.sock 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75680 ']' 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.602 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75710 /var/tmp/spdk2.sock 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75710 ']' 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
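The Code=-32603 response above is the expected failure path: the second, lock-disabled instance (pid 75710, mask 0x1c) is asked to claim its cores while pid 75680 still holds the lock on core 2. The failing call uses the same rpc_cmd wrapper seen throughout this trace:

    # expected to fail while pid 75680 owns the core 2 lock
    rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> Code=-32603 Msg='Failed to claim CPU core: 2' (as shown above)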
00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.860 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.119 ************************************ 00:06:57.119 END TEST locking_overlapped_coremask_via_rpc 00:06:57.119 ************************************ 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.119 00:06:57.119 real 0m2.702s 00:06:57.119 user 0m1.400s 00:06:57.119 sys 0m0.229s 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.119 20:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.119 20:06:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.119 20:06:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75680 ]] 00:06:57.119 20:06:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75680 00:06:57.119 20:06:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75680 ']' 00:06:57.119 20:06:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75680 00:06:57.119 20:06:46 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:57.120 20:06:46 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.120 20:06:46 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75680 00:06:57.120 killing process with pid 75680 00:06:57.120 20:06:46 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.120 20:06:46 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.120 20:06:46 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75680' 00:06:57.120 20:06:46 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75680 00:06:57.120 20:06:46 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75680 00:06:57.378 20:06:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75710 ]] 00:06:57.378 20:06:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75710 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75710 ']' 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75710 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.378 
20:06:46 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75710 00:06:57.378 killing process with pid 75710 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75710' 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75710 00:06:57.378 20:06:46 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75710 00:06:57.946 20:06:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.946 Process with pid 75680 is not found 00:06:57.946 Process with pid 75710 is not found 00:06:57.946 20:06:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.946 20:06:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75680 ]] 00:06:57.946 20:06:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75680 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75680 ']' 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75680 00:06:57.946 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75680) - No such process 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75680 is not found' 00:06:57.946 20:06:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75710 ]] 00:06:57.946 20:06:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75710 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75710 ']' 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75710 00:06:57.946 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75710) - No such process 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75710 is not found' 00:06:57.946 20:06:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.946 00:06:57.946 real 0m20.513s 00:06:57.946 user 0m36.444s 00:06:57.946 sys 0m5.827s 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.946 ************************************ 00:06:57.946 END TEST cpu_locks 00:06:57.946 ************************************ 00:06:57.946 20:06:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 00:06:58.205 real 0m48.204s 00:06:58.205 user 1m33.783s 00:06:58.205 sys 0m9.634s 00:06:58.205 20:06:47 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.205 20:06:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 ************************************ 00:06:58.205 END TEST event 00:06:58.205 ************************************ 00:06:58.205 20:06:47 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.205 20:06:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.205 20:06:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.205 20:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 ************************************ 00:06:58.205 START TEST thread 00:06:58.205 ************************************ 00:06:58.205 20:06:47 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.205 * Looking for test storage... 
00:06:58.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:58.205 20:06:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.205 20:06:47 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:58.205 20:06:47 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.205 20:06:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 ************************************ 00:06:58.205 START TEST thread_poller_perf 00:06:58.205 ************************************ 00:06:58.205 20:06:47 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.205 [2024-07-14 20:06:47.220615] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:58.205 [2024-07-14 20:06:47.220707] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75862 ] 00:06:58.464 [2024-07-14 20:06:47.359244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.464 [2024-07-14 20:06:47.448673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.464 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:59.840 ====================================== 00:06:59.840 busy:2207278736 (cyc) 00:06:59.840 total_run_count: 357000 00:06:59.840 tsc_hz: 2200000000 (cyc) 00:06:59.840 ====================================== 00:06:59.840 poller_cost: 6182 (cyc), 2810 (nsec) 00:06:59.840 00:06:59.840 real 0m1.318s 00:06:59.840 user 0m1.156s 00:06:59.840 sys 0m0.055s 00:06:59.840 20:06:48 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.840 20:06:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.841 ************************************ 00:06:59.841 END TEST thread_poller_perf 00:06:59.841 ************************************ 00:06:59.841 20:06:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.841 20:06:48 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:59.841 20:06:48 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.841 20:06:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.841 ************************************ 00:06:59.841 START TEST thread_poller_perf 00:06:59.841 ************************************ 00:06:59.841 20:06:48 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.841 [2024-07-14 20:06:48.591671] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:59.841 [2024-07-14 20:06:48.591789] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75892 ] 00:06:59.841 [2024-07-14 20:06:48.727951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.841 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:59.841 [2024-07-14 20:06:48.789431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.217 ====================================== 00:07:01.217 busy:2202155106 (cyc) 00:07:01.217 total_run_count: 5292000 00:07:01.217 tsc_hz: 2200000000 (cyc) 00:07:01.217 ====================================== 00:07:01.217 poller_cost: 416 (cyc), 189 (nsec) 00:07:01.217 00:07:01.217 real 0m1.292s 00:07:01.217 user 0m1.124s 00:07:01.217 sys 0m0.061s 00:07:01.217 20:06:49 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.217 20:06:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.217 ************************************ 00:07:01.217 END TEST thread_poller_perf 00:07:01.217 ************************************ 00:07:01.217 20:06:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.217 00:07:01.217 real 0m2.805s 00:07:01.217 user 0m2.354s 00:07:01.217 sys 0m0.230s 00:07:01.217 20:06:49 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.217 20:06:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.217 ************************************ 00:07:01.217 END TEST thread 00:07:01.217 ************************************ 00:07:01.217 20:06:49 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:01.217 20:06:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.217 20:06:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.217 20:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:01.217 ************************************ 00:07:01.217 START TEST accel 00:07:01.217 ************************************ 00:07:01.217 20:06:49 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:01.217 * Looking for test storage... 00:07:01.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:01.217 20:06:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:01.217 20:06:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:01.217 20:06:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.217 20:06:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=75968 00:07:01.217 20:06:50 accel -- accel/accel.sh@63 -- # waitforlisten 75968 00:07:01.217 20:06:50 accel -- common/autotest_common.sh@827 -- # '[' -z 75968 ']' 00:07:01.217 20:06:50 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.217 20:06:50 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.217 20:06:50 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
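Looking back at the two thread_poller_perf summaries above: each reports busy cycles, total_run_count, tsc_hz, and poller_cost. The printed poller_cost is consistent with busy divided by total_run_count, converted to nanoseconds via tsc_hz (second run: 2202155106 / 5292000 ≈ 416 cyc, and 416 cyc at 2200000000 Hz ≈ 189 nsec). A minimal shell sketch of that arithmetic, assuming that derivation (inferred from the printed figures, not taken from poller_perf's source):

  # Re-derive the second run's poller_cost from the counters printed above.
  # Assumption: poller_cost_cyc = busy_cyc / total_run_count (integer division).
  busy_cyc=2202155106
  total_run_count=5292000
  tsc_hz=2200000000
  poller_cost_cyc=$(( busy_cyc / total_run_count ))              # -> 416
  poller_cost_nsec=$(( poller_cost_cyc * 1000000000 / tsc_hz ))  # -> 189
  echo "poller_cost: ${poller_cost_cyc} (cyc), ${poller_cost_nsec} (nsec)"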
00:07:01.217 20:06:50 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.217 20:06:50 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:01.217 20:06:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.217 20:06:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:01.217 20:06:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.217 20:06:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.217 20:06:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.217 20:06:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.217 20:06:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.217 20:06:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:01.217 20:06:50 accel -- accel/accel.sh@41 -- # jq -r . 00:07:01.217 [2024-07-14 20:06:50.122509] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:01.217 [2024-07-14 20:06:50.122631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75968 ] 00:07:01.217 [2024-07-14 20:06:50.259393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.476 [2024-07-14 20:06:50.344902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.412 20:06:51 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.412 20:06:51 accel -- common/autotest_common.sh@860 -- # return 0 00:07:02.413 20:06:51 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:02.413 20:06:51 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:02.413 20:06:51 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:02.413 20:06:51 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:02.413 20:06:51 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:02.413 20:06:51 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:02.413 20:06:51 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 
00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # IFS== 00:07:02.413 20:06:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:02.413 20:06:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:02.413 20:06:51 accel -- accel/accel.sh@75 -- # killprocess 75968 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@946 -- # '[' -z 75968 ']' 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@950 -- # kill -0 75968 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@951 -- # uname 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75968 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.413 killing process with pid 75968 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75968' 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@965 -- # kill 75968 00:07:02.413 20:06:51 accel -- common/autotest_common.sh@970 -- # wait 75968 00:07:02.670 20:06:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:02.670 20:06:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:02.670 20:06:51 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:02.670 20:06:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.670 20:06:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.670 20:06:51 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.670 20:06:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:02.670 20:06:51 
accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:02.670 20:06:51 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.670 20:06:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:02.670 20:06:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:02.670 20:06:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:02.670 20:06:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.670 20:06:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.670 ************************************ 00:07:02.670 START TEST accel_missing_filename 00:07:02.670 ************************************ 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.670 20:06:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:02.670 20:06:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:02.671 [2024-07-14 20:06:51.725638] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:02.671 [2024-07-14 20:06:51.725740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76036 ] 00:07:02.928 [2024-07-14 20:06:51.862432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.928 [2024-07-14 20:06:51.923381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.928 [2024-07-14 20:06:51.979296] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.187 [2024-07-14 20:06:52.059829] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:03.187 A filename is required. 
00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.187 00:07:03.187 real 0m0.442s 00:07:03.187 user 0m0.272s 00:07:03.187 sys 0m0.109s 00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.187 20:06:52 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:03.187 ************************************ 00:07:03.187 END TEST accel_missing_filename 00:07:03.187 ************************************ 00:07:03.187 20:06:52 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.187 20:06:52 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:03.187 20:06:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.187 20:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.187 ************************************ 00:07:03.187 START TEST accel_compress_verify 00:07:03.187 ************************************ 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.187 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.187 20:06:52 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:03.187 20:06:52 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:03.187 [2024-07-14 20:06:52.216173] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:03.187 [2024-07-14 20:06:52.216265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76066 ] 00:07:03.446 [2024-07-14 20:06:52.353767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.446 [2024-07-14 20:06:52.408327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.446 [2024-07-14 20:06:52.461555] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.705 [2024-07-14 20:06:52.540268] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:03.705 00:07:03.705 Compression does not support the verify option, aborting. 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.706 00:07:03.706 real 0m0.428s 00:07:03.706 user 0m0.264s 00:07:03.706 sys 0m0.112s 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.706 ************************************ 00:07:03.706 END TEST accel_compress_verify 00:07:03.706 ************************************ 00:07:03.706 20:06:52 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:03.706 20:06:52 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:03.706 20:06:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:03.706 20:06:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.706 20:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.706 ************************************ 00:07:03.706 START TEST accel_wrong_workload 00:07:03.706 ************************************ 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:03.706 20:06:52 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:03.706 20:06:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:03.706 Unsupported workload type: foobar 00:07:03.706 [2024-07-14 20:06:52.693167] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:03.706 accel_perf options: 00:07:03.706 [-h help message] 00:07:03.706 [-q queue depth per core] 00:07:03.706 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:03.706 [-T number of threads per core 00:07:03.706 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:03.706 [-t time in seconds] 00:07:03.706 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:03.706 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:03.706 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:03.706 [-l for compress/decompress workloads, name of uncompressed input file 00:07:03.706 [-S for crc32c workload, use this seed value (default 0) 00:07:03.706 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:03.706 [-f for fill workload, use this BYTE value (default 255) 00:07:03.706 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:03.706 [-y verify result if this switch is on] 00:07:03.706 [-a tasks to allocate per core (default: same value as -q)] 00:07:03.706 Can be used to spread operations across a wider range of memory. 
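The option list just printed is accel_perf's own usage text, dumped because this negative test passed an unsupported '-w foobar' workload. For reference, an illustrative invocation exercising several of the listed flags, mirroring the crc32c case run later in this log; the binary path is the one used by this job, and the harness's extra '-c /dev/fd/62' JSON config argument is omitted here for brevity:

  # crc32c workload for 1 second, seed 32 (-S), with result verification (-y);
  # flags are taken from the usage text above.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y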
00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.706 00:07:03.706 real 0m0.031s 00:07:03.706 user 0m0.020s 00:07:03.706 sys 0m0.010s 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.706 20:06:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:03.706 ************************************ 00:07:03.706 END TEST accel_wrong_workload 00:07:03.706 ************************************ 00:07:03.706 20:06:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:03.706 20:06:52 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:03.706 20:06:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.706 20:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.706 ************************************ 00:07:03.706 START TEST accel_negative_buffers 00:07:03.706 ************************************ 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:03.706 20:06:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:03.706 -x option must be non-negative. 
00:07:03.706 [2024-07-14 20:06:52.771606] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:03.706 accel_perf options: 00:07:03.706 [-h help message] 00:07:03.706 [-q queue depth per core] 00:07:03.706 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:03.706 [-T number of threads per core 00:07:03.706 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:03.706 [-t time in seconds] 00:07:03.706 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:03.706 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:03.706 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:03.706 [-l for compress/decompress workloads, name of uncompressed input file 00:07:03.706 [-S for crc32c workload, use this seed value (default 0) 00:07:03.706 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:03.706 [-f for fill workload, use this BYTE value (default 255) 00:07:03.706 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:03.706 [-y verify result if this switch is on] 00:07:03.706 [-a tasks to allocate per core (default: same value as -q)] 00:07:03.706 Can be used to spread operations across a wider range of memory. 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.706 00:07:03.706 real 0m0.028s 00:07:03.706 user 0m0.012s 00:07:03.706 sys 0m0.016s 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.706 ************************************ 00:07:03.706 END TEST accel_negative_buffers 00:07:03.706 ************************************ 00:07:03.706 20:06:52 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:03.967 20:06:52 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:03.967 20:06:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:03.967 20:06:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.967 20:06:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.967 ************************************ 00:07:03.967 START TEST accel_crc32c 00:07:03.967 ************************************ 00:07:03.967 20:06:52 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:03.967 20:06:52 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:03.967 20:06:52 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:03.967 [2024-07-14 20:06:52.844717] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:03.967 [2024-07-14 20:06:52.844795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76119 ] 00:07:03.967 [2024-07-14 20:06:52.981768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.227 [2024-07-14 20:06:53.055461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.227 20:06:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:05.603 20:06:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.603 00:07:05.603 real 0m1.476s 00:07:05.603 user 0m1.244s 00:07:05.603 sys 0m0.135s 00:07:05.603 20:06:54 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.603 20:06:54 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:05.603 ************************************ 00:07:05.603 END TEST accel_crc32c 00:07:05.603 ************************************ 00:07:05.603 20:06:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:05.603 20:06:54 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:05.603 20:06:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.603 20:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.603 ************************************ 00:07:05.603 START TEST accel_crc32c_C2 00:07:05.603 ************************************ 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:05.603 [2024-07-14 20:06:54.371349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:05.603 [2024-07-14 20:06:54.371466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76159 ] 00:07:05.603 [2024-07-14 20:06:54.500072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.603 [2024-07-14 20:06:54.594049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.603 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:06:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.979 00:07:06.979 real 0m1.459s 00:07:06.979 user 0m1.237s 00:07:06.979 sys 0m0.122s 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.979 20:06:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:06.979 ************************************ 00:07:06.979 END TEST accel_crc32c_C2 00:07:06.979 ************************************ 00:07:06.979 20:06:55 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:06.979 20:06:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:06.979 20:06:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.979 20:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.979 ************************************ 00:07:06.979 START TEST accel_copy 00:07:06.979 ************************************ 00:07:06.979 20:06:55 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:06.979 20:06:55 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:06.979 [2024-07-14 20:06:55.884412] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:06.979 [2024-07-14 20:06:55.885140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76188 ] 00:07:06.979 [2024-07-14 20:06:56.023224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.238 [2024-07-14 20:06:56.109141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.238 20:06:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.615 20:06:57 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:08.615 20:06:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.615 00:07:08.615 real 0m1.459s 00:07:08.615 user 0m1.230s 00:07:08.615 sys 0m0.129s 00:07:08.616 20:06:57 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.616 20:06:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.616 ************************************ 00:07:08.616 END TEST accel_copy 00:07:08.616 ************************************ 00:07:08.616 20:06:57 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.616 20:06:57 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:08.616 20:06:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.616 20:06:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.616 ************************************ 00:07:08.616 START TEST accel_fill 00:07:08.616 ************************************ 00:07:08.616 20:06:57 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
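Note on the accel_copy test that finishes just above (real 0m1.459s): it drives the software copy path for one second over 4096-byte buffers. Judging by the command echoed in its trace, a rough standalone equivalent, dropping the -c /dev/fd/62 JSON config that the harness pipes in, would be:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y   # one-second software copy; -y appears to enable result verification (flag meaning assumed)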
00:07:08.616 [2024-07-14 20:06:57.398283] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:08.616 [2024-07-14 20:06:57.398406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76223 ] 00:07:08.616 [2024-07-14 20:06:57.537960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.616 [2024-07-14 20:06:57.604123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:08.616 20:06:57 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:08.616 20:06:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:09.995 20:06:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.995 00:07:09.995 real 0m1.439s 00:07:09.995 user 0m1.218s 00:07:09.995 sys 0m0.123s 00:07:09.995 20:06:58 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.995 20:06:58 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 ************************************ 00:07:09.995 END TEST accel_fill 00:07:09.995 ************************************ 00:07:09.995 20:06:58 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:09.995 20:06:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:09.995 20:06:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.995 20:06:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 ************************************ 00:07:09.995 START TEST accel_copy_crc32c 00:07:09.995 ************************************ 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:09.995 20:06:58 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:09.995 [2024-07-14 20:06:58.897468] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
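The accel_fill run that completes above (real 0m1.439s) layers a fill pattern and queueing parameters on the same one-second scheme. Based on the command echoed in the trace, a comparable manual invocation would be roughly:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 128 (0x80 in the trace); the two 64s are assumed here to be queue depth and alignment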
00:07:09.995 [2024-07-14 20:06:58.897590] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76257 ] 00:07:09.995 [2024-07-14 20:06:59.035005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.254 [2024-07-14 20:06:59.118957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.254 20:06:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.628 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.629 00:07:11.629 real 0m1.468s 00:07:11.629 user 0m1.250s 00:07:11.629 sys 0m0.119s 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.629 ************************************ 00:07:11.629 END TEST accel_copy_crc32c 00:07:11.629 ************************************ 00:07:11.629 20:07:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:11.629 20:07:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.629 20:07:00 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:11.629 20:07:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.629 20:07:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.629 ************************************ 00:07:11.629 START TEST accel_copy_crc32c_C2 00:07:11.629 ************************************ 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:11.629 [2024-07-14 20:07:00.419540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:11.629 [2024-07-14 20:07:00.419633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76292 ] 00:07:11.629 [2024-07-14 20:07:00.559048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.629 [2024-07-14 20:07:00.636904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 20:07:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.827 00:07:12.827 real 0m1.459s 00:07:12.827 user 0m1.230s 00:07:12.827 sys 0m0.134s 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.827 ************************************ 00:07:12.827 END TEST accel_copy_crc32c_C2 00:07:12.827 ************************************ 00:07:12.827 20:07:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:12.827 20:07:01 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:07:12.827 20:07:01 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:12.827 20:07:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.827 20:07:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.827 ************************************ 00:07:12.827 START TEST accel_dualcast 00:07:12.827 ************************************ 00:07:12.827 20:07:01 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:12.827 20:07:01 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:12.827 20:07:01 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:13.099 20:07:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:13.100 20:07:01 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:13.100 [2024-07-14 20:07:01.932578] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
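The two copy_crc32c variants that finish above pair a buffer copy with a CRC-32C computation. The base run (real 0m1.468s) works on 4096-byte buffers, while the -C 2 run (real 0m1.459s) shows an additional 8192-byte buffer in its trace, consistent with the data being split across two 4096-byte segments (interpretation assumed, not stated in the log). As echoed by the harness:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y        # single-segment copy + CRC-32C
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # two-segment variant, per the -C 2 argument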
00:07:13.100 [2024-07-14 20:07:01.932671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76326 ] 00:07:13.100 [2024-07-14 20:07:02.069007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.100 [2024-07-14 20:07:02.134131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.357 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.358 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.358 20:07:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.358 20:07:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.358 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.358 20:07:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.292 
20:07:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:14.292 20:07:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.292 00:07:14.292 real 0m1.442s 00:07:14.292 user 0m1.222s 00:07:14.292 sys 0m0.120s 00:07:14.292 20:07:03 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.292 ************************************ 00:07:14.292 END TEST accel_dualcast 00:07:14.292 ************************************ 00:07:14.292 20:07:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:14.550 20:07:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:14.550 20:07:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:14.550 20:07:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.550 20:07:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.550 ************************************ 00:07:14.550 START TEST accel_compare 00:07:14.550 ************************************ 00:07:14.550 20:07:03 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:14.550 20:07:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:14.550 [2024-07-14 20:07:03.430875] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
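accel_dualcast, which wraps up above in real 0m1.442s, again uses 4096-byte buffers; dualcast is generally understood to write one source to two destinations, though the log itself only records the workload name. Its echoed command is simply:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y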
00:07:14.550 [2024-07-14 20:07:03.430973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76361 ] 00:07:14.550 [2024-07-14 20:07:03.566667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.550 [2024-07-14 20:07:03.623834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.808 20:07:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.741 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.998 20:07:04 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:15.998 20:07:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.998 00:07:15.998 real 0m1.424s 00:07:15.999 user 0m1.205s 00:07:15.999 sys 0m0.121s 00:07:15.999 20:07:04 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.999 20:07:04 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:15.999 ************************************ 00:07:15.999 END TEST accel_compare 00:07:15.999 ************************************ 00:07:15.999 20:07:04 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:15.999 20:07:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:15.999 20:07:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.999 20:07:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.999 ************************************ 00:07:15.999 START TEST accel_xor 00:07:15.999 ************************************ 00:07:15.999 20:07:04 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:15.999 20:07:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:15.999 [2024-07-14 20:07:04.903379] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
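The accel_xor case starting above goes through the same accel_test wrapper seen throughout this trace: build_accel_config assembles a JSON accel config (empty in this run, since none of the module/driver options are set), hands it to accel_perf on /dev/fd/62, runs the workload for one second, and finally asserts that the reported module and opcode are software and xor. A minimal sketch of the invocation recorded here, assuming the repo layout used by this job:

  # as run by accel_test in this trace: xor the 4096-byte source buffers
  # (the wrapper echoes '2' sources since no -x flag is given) for 1 second
  # with result verification (-y); /dev/fd/62 carries the generated JSON
  # config, which is empty, so the software accel module services the op
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y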
00:07:15.999 [2024-07-14 20:07:04.903490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76397 ] 00:07:15.999 [2024-07-14 20:07:05.042345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.257 [2024-07-14 20:07:05.117781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:16.257 20:07:05 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.257 20:07:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.631 00:07:17.631 real 0m1.487s 00:07:17.631 user 0m1.258s 00:07:17.631 sys 0m0.131s 00:07:17.631 20:07:06 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.631 ************************************ 00:07:17.631 END TEST accel_xor 00:07:17.631 ************************************ 00:07:17.631 20:07:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:17.631 20:07:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:17.631 20:07:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:17.631 20:07:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.631 20:07:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.631 ************************************ 00:07:17.631 START TEST accel_xor 00:07:17.631 ************************************ 00:07:17.631 20:07:06 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:17.631 20:07:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:17.631 [2024-07-14 20:07:06.442072] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:17.631 [2024-07-14 20:07:06.442151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76432 ] 00:07:17.631 [2024-07-14 20:07:06.575306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.631 [2024-07-14 20:07:06.669109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:17.889 20:07:06 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.889 20:07:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.823 ************************************ 00:07:18.823 END TEST accel_xor 00:07:18.823 ************************************ 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:18.823 20:07:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.823 00:07:18.823 real 0m1.458s 00:07:18.823 user 0m1.253s 00:07:18.823 sys 0m0.115s 00:07:18.823 20:07:07 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.823 20:07:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:19.082 20:07:07 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:19.082 20:07:07 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:19.082 20:07:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.082 20:07:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.082 ************************************ 00:07:19.082 START TEST accel_dif_verify 00:07:19.082 ************************************ 00:07:19.082 20:07:07 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:19.082 20:07:07 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:19.082 [2024-07-14 20:07:07.952284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:19.082 [2024-07-14 20:07:07.952388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76465 ] 00:07:19.082 [2024-07-14 20:07:08.081253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.082 [2024-07-14 20:07:08.158065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.340 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.341 20:07:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:20.718 ************************************ 00:07:20.718 END TEST accel_dif_verify 00:07:20.718 ************************************ 00:07:20.718 20:07:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.718 00:07:20.718 real 0m1.444s 00:07:20.718 user 0m1.234s 00:07:20.718 sys 0m0.121s 00:07:20.718 20:07:09 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.718 20:07:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.718 20:07:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:20.718 20:07:09 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:20.718 20:07:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.718 20:07:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.718 ************************************ 00:07:20.718 START TEST accel_dif_generate 00:07:20.718 ************************************ 00:07:20.718 20:07:09 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:20.718 [2024-07-14 20:07:09.453536] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:20.718 [2024-07-14 20:07:09.453627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76501 ] 00:07:20.718 [2024-07-14 20:07:09.592343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.718 [2024-07-14 20:07:09.683042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 
20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.718 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.719 20:07:09 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.719 20:07:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.094 20:07:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.095 ************************************ 00:07:22.095 END TEST accel_dif_generate 00:07:22.095 ************************************ 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:22.095 20:07:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.095 00:07:22.095 real 0m1.459s 00:07:22.095 user 0m1.254s 
00:07:22.095 sys 0m0.115s 00:07:22.095 20:07:10 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.095 20:07:10 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:22.095 20:07:10 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:22.095 20:07:10 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:22.095 20:07:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.095 20:07:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.095 ************************************ 00:07:22.095 START TEST accel_dif_generate_copy 00:07:22.095 ************************************ 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:22.095 20:07:10 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:22.095 [2024-07-14 20:07:10.961885] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
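accel_dif_generate_copy, started above, is the generate-while-copy counterpart of the dif_generate case that just completed (real 0m1.459s): as the name suggests, the opcode produces the protection metadata while copying the payload, and the wrapper again checks that the software module reported the dif_generate_copy opcode at the end. Recorded invocation:

  # 1-second dif_generate_copy run on the software module over 4096-byte buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy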
00:07:22.095 [2024-07-14 20:07:10.962021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76530 ] 00:07:22.095 [2024-07-14 20:07:11.098132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.354 [2024-07-14 20:07:11.185692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.354 20:07:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.736 00:07:23.736 real 0m1.472s 00:07:23.736 user 0m1.255s 00:07:23.736 sys 0m0.122s 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.736 20:07:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.736 ************************************ 00:07:23.736 END TEST accel_dif_generate_copy 00:07:23.736 ************************************ 00:07:23.736 20:07:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:23.736 20:07:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.736 20:07:12 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:23.736 20:07:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.736 20:07:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.736 ************************************ 00:07:23.736 START TEST accel_comp 00:07:23.736 ************************************ 00:07:23.736 20:07:12 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
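(Sketch note, not part of the captured log.) The repeated "case $var" / "read -r var val" entries above are xtrace noise from the accel.sh harness: it parses the settings accel_perf reports back, keeps the opcode and module, and the trailing "[[ -n software ]]" / "[[ software == software ]]" checks are the actual assertions. A hedged bash reconstruction of that loop follows; the key names and the whitespace trimming are assumptions, only the loop shape, line numbers, and final checks are taken from the trace (the harness additionally passes "-c /dev/fd/62" with a JSON config fd, omitted here):

    # Sketch of the accel.sh@19-23 loop implied by the trace; not the verbatim harness.
    while IFS=: read -r var val; do
        val=${val# }                                  # accel.sh@20: assumed leading-space trim
        case "$var" in                                # accel.sh@21
            *Module*)   accel_module=$val ;;          # accel.sh@22 -> "software" in this run
            *Workload*) accel_opc=$val ;;             # accel.sh@23 -> "dif_generate_copy"
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy)
    # accel.sh@27: the test only passes if both values were seen and the module matches.
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]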
00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:23.736 [2024-07-14 20:07:12.488512] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:23.736 [2024-07-14 20:07:12.488604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76570 ] 00:07:23.736 [2024-07-14 20:07:12.627780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.736 [2024-07-14 20:07:12.700417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.736 20:07:12 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.736 20:07:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 ************************************ 00:07:25.115 END TEST accel_comp 00:07:25.115 ************************************ 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:25.115 20:07:13 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.115 00:07:25.115 real 0m1.461s 00:07:25.115 user 0m1.246s 00:07:25.115 sys 0m0.124s 00:07:25.115 20:07:13 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.115 20:07:13 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:25.115 20:07:13 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.115 20:07:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:25.115 20:07:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.115 20:07:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.115 ************************************ 00:07:25.115 START TEST accel_decomp 00:07:25.115 ************************************ 00:07:25.115 20:07:13 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:25.115 
20:07:13 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:25.115 20:07:13 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:25.115 [2024-07-14 20:07:14.004762] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:25.116 [2024-07-14 20:07:14.005076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76599 ] 00:07:25.116 [2024-07-14 20:07:14.143271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.374 [2024-07-14 20:07:14.242259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.374 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.375 20:07:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.752 20:07:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.752 ************************************ 00:07:26.752 END TEST accel_decomp 00:07:26.752 ************************************ 00:07:26.752 00:07:26.752 real 0m1.487s 00:07:26.752 user 0m1.279s 00:07:26.752 sys 0m0.112s 00:07:26.752 20:07:15 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.752 20:07:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:26.752 20:07:15 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.752 20:07:15 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:26.752 20:07:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.752 20:07:15 accel -- common/autotest_common.sh@10 -- # set +x 
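(Sketch note, not part of the captured log.) Each "run_test NAME accel_test ..." entry above, from common/autotest_common.sh, is what produces the START/END banners and the real/user/sys lines: it times the wrapped command and brackets it with the banner. The accel_decmop_full run being launched here differs from plain accel_decomp only by "-y -o 0", which, judging by the later '111250 bytes' value replacing '4096 bytes', makes accel_perf verify the output and use the whole test file as a single transfer. A hedged sketch of the wrapper shape (the real helper also toggles xtrace and counts arguments, as the '[' 11 -le 1 ']' check shows):

    # Not the verbatim autotest_common.sh helper, just the shape the log implies.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                     # produces the real/user/sys lines seen above
        local rc=$?
        echo "************ END TEST $name ************"
        return "$rc"
    }
    # Usage as it appears at accel.sh@118 (accel_test is defined in accel.sh):
    run_test accel_decmop_full accel_test -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0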
00:07:26.752 ************************************ 00:07:26.752 START TEST accel_decmop_full 00:07:26.752 ************************************ 00:07:26.752 20:07:15 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:26.752 20:07:15 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:26.752 [2024-07-14 20:07:15.544553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:26.752 [2024-07-14 20:07:15.544658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76639 ] 00:07:26.752 [2024-07-14 20:07:15.683503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.752 [2024-07-14 20:07:15.786572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.011 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.011 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.011 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.011 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.011 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.012 20:07:15 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 20:07:17 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.949 ************************************ 00:07:27.949 END TEST accel_decmop_full 00:07:27.949 ************************************ 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.949 20:07:17 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.949 00:07:27.949 real 0m1.501s 00:07:27.949 user 0m1.291s 00:07:27.949 sys 0m0.116s 00:07:27.949 20:07:17 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.949 20:07:17 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:28.207 20:07:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:28.207 20:07:17 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:28.207 20:07:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.207 20:07:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.207 ************************************ 00:07:28.207 START TEST accel_decomp_mcore 00:07:28.207 ************************************ 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
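(Sketch note, not part of the captured log.) The "-m 0xf" mask passed to accel_perf for the mcore test selects CPU cores 0-3, which is why this run logs four "Reactor started on core N" notices instead of the single core 0 used by the earlier tests. A small self-contained helper to show how such a mask maps to core IDs; the function name is made up for illustration:

    # Hypothetical helper: list the core IDs selected by an SPDK/DPDK core mask.
    mask_to_cores() {
        local mask=$(($1)) core=0 cores=()
        while ((mask > 0)); do
            if ((mask & 1)); then cores+=("$core"); fi
            mask=$((mask >> 1))
            core=$((core + 1))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0xf   # -> "0 1 2 3", matching the four reactors in the trace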
00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:28.207 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:28.208 [2024-07-14 20:07:17.105175] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:28.208 [2024-07-14 20:07:17.105292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76668 ] 00:07:28.208 [2024-07-14 20:07:17.246201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.467 [2024-07-14 20:07:17.361903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.467 [2024-07-14 20:07:17.362014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.467 [2024-07-14 20:07:17.362961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.467 [2024-07-14 20:07:17.362994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.467 20:07:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.843 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.844 00:07:29.844 real 0m1.534s 00:07:29.844 user 0m4.757s 00:07:29.844 sys 0m0.139s 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.844 20:07:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.844 ************************************ 00:07:29.844 END TEST accel_decomp_mcore 00:07:29.844 ************************************ 00:07:29.844 20:07:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.844 20:07:18 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:29.844 20:07:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.844 20:07:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.844 ************************************ 00:07:29.844 START TEST accel_decomp_full_mcore 00:07:29.844 ************************************ 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:29.844 20:07:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
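(Sketch note, not part of the captured log.) The build_accel_config block that ends with "jq -r ." above runs before every accel_perf invocation; in this job all of its "[[ 0 -gt 0 ]]" guards are false and "[[ -n '' ]]" fails, so no JSON subsystem config is emitted and every test falls back to the software module (hence accel_module=software throughout). A hedged reconstruction of that shape, not the verbatim accel.sh; the counter and RPC method names are assumptions:

    build_accel_config() {
        accel_json_cfg=()                                 # accel.sh@31
        # accel.sh@32-34: each guard adds an optional HW module config; 0 in this job.
        if ((DSA_DEVICES > 0)); then accel_json_cfg+=('{"method": "dsa_scan_accel_module"}'); fi
        if [[ -n ${accel_json_cfg[*]} ]]; then            # accel.sh@36: empty here, so skipped
            local IFS=,                                   # accel.sh@40
            printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' \
                "${accel_json_cfg[*]}" | jq -r .          # accel.sh@41
        fi
    }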
00:07:29.844 [2024-07-14 20:07:18.692767] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:29.844 [2024-07-14 20:07:18.692906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76711 ] 00:07:29.844 [2024-07-14 20:07:18.831681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.103 [2024-07-14 20:07:18.941212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.103 [2024-07-14 20:07:18.941431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.103 [2024-07-14 20:07:18.942352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.103 [2024-07-14 20:07:18.942363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.103 20:07:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.478 20:07:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.478 00:07:31.478 real 0m1.542s 00:07:31.478 user 0m4.786s 00:07:31.478 sys 0m0.147s 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.478 20:07:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:31.478 ************************************ 00:07:31.478 END TEST accel_decomp_full_mcore 00:07:31.478 ************************************ 00:07:31.479 20:07:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.479 20:07:20 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:31.479 20:07:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.479 20:07:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.479 ************************************ 00:07:31.479 START TEST accel_decomp_mthread 00:07:31.479 ************************************ 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:31.479 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:31.479 [2024-07-14 20:07:20.288688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
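The trace above launches accel_perf for the multi-threaded decompress case (accel_decomp_mthread). For reference, a standalone rerun of the same command might look like the sketch below; the flag meanings are inferred from the values echoed in the surrounding trace and should be confirmed against accel_perf --help on the build under test.

    # Hypothetical manual rerun of the accel_decomp_mthread invocation.
    # Flag meanings inferred from the trace (not authoritative):
    #   -t 1            run for 1 second (val='1 seconds' in the trace)
    #   -w decompress   decompress workload (accel_opc=decompress)
    #   -l <file>       compressed input prepared by the test (test/accel/bib)
    #   -y              verify the decompressed output (assumed)
    #   -T 2            two worker threads (val=2 in the trace that follows)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2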
00:07:31.479 [2024-07-14 20:07:20.288774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76743 ] 00:07:31.479 [2024-07-14 20:07:20.423365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.479 [2024-07-14 20:07:20.524863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.737 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.738 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.738 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.738 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.738 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.738 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.738 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.738 20:07:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.687 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.688 00:07:32.688 real 0m1.481s 00:07:32.688 user 0m1.260s 00:07:32.688 sys 0m0.127s 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.688 20:07:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:32.688 ************************************ 00:07:32.688 END TEST accel_decomp_mthread 00:07:32.688 ************************************ 00:07:32.945 20:07:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.945 20:07:21 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:32.945 20:07:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.945 20:07:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.945 ************************************ 00:07:32.945 START TEST accel_decomp_full_mthread 00:07:32.945 ************************************ 00:07:32.945 20:07:21 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:32.945 20:07:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:32.945 [2024-07-14 20:07:21.833250] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
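This invocation differs from the previous one only by '-o 0'. Judging from the echoed size descriptors ('4096 bytes' in the plain mthread run versus '111250 bytes' here and in the full_mcore run), that flag appears to switch the operation from fixed 4 KiB blocks to the full size of test/accel/bib; treat this as an inference from the log rather than a documented guarantee.

    # Same decompress workload, full-buffer variant (inferred effect of -o 0):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2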
00:07:32.945 [2024-07-14 20:07:21.833372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76783 ] 00:07:32.945 [2024-07-14 20:07:21.970712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.203 [2024-07-14 20:07:22.059583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.203 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.204 20:07:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.576 00:07:34.576 real 0m1.502s 00:07:34.576 user 0m1.295s 00:07:34.576 sys 0m0.117s 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.576 20:07:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:34.576 ************************************ 00:07:34.576 END TEST accel_decomp_full_mthread 00:07:34.576 ************************************ 00:07:34.576 20:07:23 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:07:34.576 20:07:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.576 20:07:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:34.576 20:07:23 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:34.576 20:07:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.576 20:07:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.576 20:07:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.576 20:07:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.576 20:07:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.576 20:07:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.576 20:07:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.576 20:07:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:34.576 20:07:23 accel -- accel/accel.sh@41 -- # jq -r . 00:07:34.576 ************************************ 00:07:34.576 START TEST accel_dif_functional_tests 00:07:34.576 ************************************ 00:07:34.576 20:07:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.576 [2024-07-14 20:07:23.420349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:34.576 [2024-07-14 20:07:23.420476] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76813 ] 00:07:34.576 [2024-07-14 20:07:23.561093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.576 [2024-07-14 20:07:23.658389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.576 [2024-07-14 20:07:23.658538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.835 [2024-07-14 20:07:23.658539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.835 00:07:34.835 00:07:34.835 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.835 http://cunit.sourceforge.net/ 00:07:34.835 00:07:34.835 00:07:34.835 Suite: accel_dif 00:07:34.835 Test: verify: DIF generated, GUARD check ...passed 00:07:34.835 Test: verify: DIF generated, APPTAG check ...passed 00:07:34.835 Test: verify: DIF generated, REFTAG check ...passed 00:07:34.835 Test: verify: DIF not generated, GUARD check ...passed 00:07:34.835 Test: verify: DIF not generated, APPTAG check ...passed 00:07:34.835 Test: verify: DIF not generated, REFTAG check ...passed 00:07:34.835 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:34.835 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:34.835 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-14 20:07:23.750187] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.835 [2024-07-14 20:07:23.750272] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.835 [2024-07-14 20:07:23.750306] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.835 [2024-07-14 20:07:23.750375] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:34.835 passed 00:07:34.835 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:34.835 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:07:34.835 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:34.835 Test: verify copy: DIF generated, GUARD check ...[2024-07-14 20:07:23.750515] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:34.835 passed 00:07:34.835 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:34.835 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:34.835 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:34.835 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 20:07:23.750682] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.835 [2024-07-14 20:07:23.750717] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.835 passed 00:07:34.835 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:34.835 Test: generate copy: DIF generated, GUARD check ...passed 00:07:34.835 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:34.835 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:34.835 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-07-14 20:07:23.750749] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.835 passed 00:07:34.835 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:34.835 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:34.835 Test: generate copy: iovecs-len validate ...passed 00:07:34.835 Test: generate copy: buffer alignment validate ...passed 00:07:34.835 00:07:34.835 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.835 suites 1 1 n/a 0 0 00:07:34.835 tests 26 26 26 0 0 00:07:34.835 asserts 115 115 115 0 n/a 00:07:34.835 00:07:34.835 Elapsed time = 0.002 seconds 00:07:34.835 [2024-07-14 20:07:23.751005] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:35.092 00:07:35.092 real 0m0.611s 00:07:35.092 user 0m0.803s 00:07:35.092 sys 0m0.154s 00:07:35.092 20:07:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.092 20:07:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:35.092 ************************************ 00:07:35.092 END TEST accel_dif_functional_tests 00:07:35.092 ************************************ 00:07:35.092 00:07:35.092 real 0m34.051s 00:07:35.092 user 0m35.686s 00:07:35.092 sys 0m4.141s 00:07:35.092 20:07:24 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.092 20:07:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.092 ************************************ 00:07:35.092 END TEST accel 00:07:35.092 ************************************ 00:07:35.092 20:07:24 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:35.093 20:07:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:35.093 20:07:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.093 20:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:35.093 ************************************ 00:07:35.093 START TEST accel_rpc 00:07:35.093 ************************************ 00:07:35.093 20:07:24 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:35.093 * Looking for test storage... 00:07:35.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:35.093 20:07:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.093 20:07:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=76883 00:07:35.093 20:07:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 76883 00:07:35.093 20:07:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:35.093 20:07:24 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 76883 ']' 00:07:35.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.093 20:07:24 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.093 20:07:24 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.093 20:07:24 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.093 20:07:24 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.093 20:07:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.350 [2024-07-14 20:07:24.219657] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
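The accel_rpc suite that starts here boots spdk_tgt with --wait-for-rpc and then drives it over JSON-RPC. A minimal sketch of the same sequence issued by hand, assuming the default /var/tmp/spdk.sock socket used by rpc.py (the test wraps these calls in rpc_cmd):

    # RPC sequence mirrored from the accel_assign_opcode trace below:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init (see NOTICE in the trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software    # reassign the copy opcode to the software module
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init                    # finish the startup deferred by --wait-for-rpc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # the test greps this output for 'software'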
00:07:35.350 [2024-07-14 20:07:24.219771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76883 ] 00:07:35.350 [2024-07-14 20:07:24.358193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.609 [2024-07-14 20:07:24.458580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.175 20:07:25 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.175 20:07:25 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:36.175 20:07:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:36.175 20:07:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:36.175 20:07:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:36.175 20:07:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:36.175 20:07:25 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:36.175 20:07:25 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.175 20:07:25 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.175 20:07:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.175 ************************************ 00:07:36.175 START TEST accel_assign_opcode 00:07:36.175 ************************************ 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.175 [2024-07-14 20:07:25.247196] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.175 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.175 [2024-07-14 20:07:25.255177] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:36.434 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.692 software 00:07:36.692 00:07:36.692 real 0m0.299s 00:07:36.692 user 0m0.056s 00:07:36.692 sys 0m0.008s 00:07:36.692 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.692 20:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.692 ************************************ 00:07:36.692 END TEST accel_assign_opcode 00:07:36.692 ************************************ 00:07:36.692 20:07:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 76883 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 76883 ']' 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 76883 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76883 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:36.692 killing process with pid 76883 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76883' 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@965 -- # kill 76883 00:07:36.692 20:07:25 accel_rpc -- common/autotest_common.sh@970 -- # wait 76883 00:07:36.956 00:07:36.956 real 0m1.908s 00:07:36.956 user 0m2.036s 00:07:36.956 sys 0m0.449s 00:07:36.956 20:07:25 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.956 20:07:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.956 ************************************ 00:07:36.956 END TEST accel_rpc 00:07:36.956 ************************************ 00:07:36.956 20:07:26 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:36.956 20:07:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.956 20:07:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.956 20:07:26 -- common/autotest_common.sh@10 -- # set +x 00:07:36.956 ************************************ 00:07:36.956 START TEST app_cmdline 00:07:36.956 ************************************ 00:07:36.956 20:07:26 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:37.213 * Looking for test storage... 00:07:37.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:37.213 20:07:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:37.213 20:07:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=76994 00:07:37.213 20:07:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 76994 00:07:37.213 20:07:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:37.213 20:07:26 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 76994 ']' 00:07:37.213 20:07:26 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
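The app_cmdline test launched above restricts the target's RPC surface with --rpcs-allowed. The exchanges in the lines that follow can be reproduced by hand roughly as sketched here, again assuming the default rpc.py socket:

    # Allowed methods succeed; anything else is rejected with -32601:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # returns the version JSON shown below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods          # lists only the two permitted methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # fails with 'Method not found', as the test expects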
00:07:37.213 20:07:26 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:37.213 20:07:26 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.213 20:07:26 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:37.213 20:07:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.213 [2024-07-14 20:07:26.174952] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:37.213 [2024-07-14 20:07:26.175077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76994 ] 00:07:37.470 [2024-07-14 20:07:26.310380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.470 [2024-07-14 20:07:26.397838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.402 20:07:27 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.402 20:07:27 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:38.402 20:07:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:38.402 { 00:07:38.402 "fields": { 00:07:38.402 "commit": "5fa2f5086", 00:07:38.402 "major": 24, 00:07:38.402 "minor": 5, 00:07:38.402 "patch": 1, 00:07:38.402 "suffix": "-pre" 00:07:38.402 }, 00:07:38.402 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086" 00:07:38.402 } 00:07:38.402 20:07:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.402 20:07:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.402 20:07:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.402 20:07:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.402 20:07:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.402 20:07:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.402 20:07:27 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.403 20:07:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.403 20:07:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.403 20:07:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.403 20:07:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.403 20:07:27 
app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:38.403 20:07:27 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.661 2024/07/14 20:07:27 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:38.661 request: 00:07:38.661 { 00:07:38.661 "method": "env_dpdk_get_mem_stats", 00:07:38.661 "params": {} 00:07:38.661 } 00:07:38.661 Got JSON-RPC error response 00:07:38.661 GoRPCClient: error on JSON-RPC call 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.661 20:07:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 76994 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 76994 ']' 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 76994 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.661 20:07:27 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76994 00:07:38.919 20:07:27 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.919 killing process with pid 76994 00:07:38.919 20:07:27 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.919 20:07:27 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76994' 00:07:38.919 20:07:27 app_cmdline -- common/autotest_common.sh@965 -- # kill 76994 00:07:38.919 20:07:27 app_cmdline -- common/autotest_common.sh@970 -- # wait 76994 00:07:39.177 00:07:39.177 real 0m2.119s 00:07:39.177 user 0m2.653s 00:07:39.177 sys 0m0.488s 00:07:39.177 20:07:28 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.177 20:07:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.177 ************************************ 00:07:39.177 END TEST app_cmdline 00:07:39.177 ************************************ 00:07:39.177 20:07:28 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:39.177 20:07:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:39.177 20:07:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.177 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:07:39.177 ************************************ 00:07:39.177 START TEST version 00:07:39.177 ************************************ 00:07:39.177 20:07:28 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:39.436 * Looking for test storage... 
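version.sh, which runs next, reconstructs the SPDK version string by grepping include/spdk/version.h and compares it with the installed Python package. A condensed equivalent of what the trace below does, assuming the same tab-separated header layout:

    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    echo "${major}.${minor}.${patch}"   # 24.5.1 on this build; the script then maps the '-pre' suffix
                                        # to 'rc0' before comparing against
                                        # python3 -c 'import spdk; print(spdk.__version__)'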
00:07:39.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:39.436 20:07:28 version -- app/version.sh@17 -- # get_header_version major 00:07:39.436 20:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # cut -f2 00:07:39.436 20:07:28 version -- app/version.sh@17 -- # major=24 00:07:39.436 20:07:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.436 20:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # cut -f2 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.436 20:07:28 version -- app/version.sh@18 -- # minor=5 00:07:39.436 20:07:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.436 20:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # cut -f2 00:07:39.436 20:07:28 version -- app/version.sh@19 -- # patch=1 00:07:39.436 20:07:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.436 20:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # cut -f2 00:07:39.436 20:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.436 20:07:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.436 20:07:28 version -- app/version.sh@22 -- # version=24.5 00:07:39.436 20:07:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.436 20:07:28 version -- app/version.sh@25 -- # version=24.5.1 00:07:39.436 20:07:28 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:39.436 20:07:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:39.436 20:07:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.436 20:07:28 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:39.436 20:07:28 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:39.436 00:07:39.436 real 0m0.155s 00:07:39.436 user 0m0.091s 00:07:39.436 sys 0m0.094s 00:07:39.436 20:07:28 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.436 20:07:28 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.436 ************************************ 00:07:39.436 END TEST version 00:07:39.436 ************************************ 00:07:39.436 20:07:28 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@198 -- # uname -s 00:07:39.436 20:07:28 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:39.436 20:07:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.436 20:07:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.436 20:07:28 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@260 -- # timing_exit 
lib 00:07:39.436 20:07:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.436 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:07:39.436 20:07:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:39.436 20:07:28 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:39.436 20:07:28 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.436 20:07:28 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:39.436 20:07:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.436 20:07:28 -- common/autotest_common.sh@10 -- # set +x 00:07:39.436 ************************************ 00:07:39.436 START TEST nvmf_tcp 00:07:39.436 ************************************ 00:07:39.436 20:07:28 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.695 * Looking for test storage... 00:07:39.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.695 20:07:28 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.695 20:07:28 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.695 20:07:28 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.695 20:07:28 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.695 20:07:28 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.695 20:07:28 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.695 20:07:28 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:39.695 20:07:28 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.695 20:07:28 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.696 20:07:28 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.696 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:39.696 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:39.696 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:39.696 20:07:28 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:39.696 20:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.696 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:39.696 20:07:28 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:39.696 20:07:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:39.696 20:07:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.696 20:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.696 ************************************ 00:07:39.696 START TEST nvmf_example 00:07:39.696 ************************************ 00:07:39.696 20:07:28 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:39.696 * Looking for test storage... 00:07:39.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:39.696 Cannot find device "nvmf_init_br" 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:39.696 Cannot find device "nvmf_tgt_br" 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:39.696 Cannot find device "nvmf_tgt_br2" 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:39.696 Cannot find device "nvmf_init_br" 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:39.696 Cannot find device "nvmf_tgt_br" 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:39.696 Cannot find device "nvmf_tgt_br2" 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:39.696 Cannot find device "nvmf_br" 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:07:39.696 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:39.955 Cannot find device "nvmf_init_if" 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:39.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:39.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
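The trace above and immediately below is nvmf_veth_init from test/nvmf/common.sh tearing down any stale interfaces and then rebuilding the test network: one veth pair for the initiator on the host side, two veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the host-side peers (wired up in the entries that follow). A condensed, standalone sketch of that topology, with names and addresses taken from the trace rather than from the actual helper, might look like:

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds (names/addresses from the trace).
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side (host)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target address 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target address 10.0.0.3

# Move the target-facing ends into the namespace where the nvmf app will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3, all /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers together and allow the NVMe/TCP port through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: initiator and target sides should reach each other.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1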
00:07:39.955 20:07:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:39.956 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:39.956 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:39.956 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:39.956 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:40.214 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:40.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:07:40.215 00:07:40.215 --- 10.0.0.2 ping statistics --- 00:07:40.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.215 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:40.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:40.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:07:40.215 00:07:40.215 --- 10.0.0.3 ping statistics --- 00:07:40.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.215 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:40.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:40.215 00:07:40.215 --- 10.0.0.1 ping statistics --- 00:07:40.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.215 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=77342 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 77342 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 77342 ']' 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
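At this point the trace has launched the example NVMe-oF target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/examples/nvmf -i 0 -g 10000 -m 0xF, pid 77342) and waitforlisten is blocking until the app exposes its RPC socket at /var/tmp/spdk.sock. A rough, simplified stand-in for that launch-and-wait step is shown below; the flags and paths are copied verbatim from the log, while the real nvmfexamplestart and waitforlisten helpers (test/nvmf/target/nvmf_example.sh, test/common/autotest_common.sh) are more involved than this socket-existence poll.

#!/usr/bin/env bash
# Simplified stand-in for nvmfexamplestart + waitforlisten as traced above.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Launch the example NVMe-oF target inside the test namespace so it can bind
# to the 10.0.0.x addresses created by nvmf_veth_init.
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_ROOT/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
nvmfpid=$!

# Wait until the app is listening on its UNIX-domain RPC socket
# (the trace shows max_retries=100 for the real helper).
echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
for ((i = 0; i < 100; i++)); do
    if [[ -S "$RPC_SOCK" ]]; then
        echo "process $nvmfpid is listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done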
00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.215 20:07:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.151 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:41.411 20:07:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:53.646 Initializing NVMe Controllers 00:07:53.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:53.646 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:53.646 Initialization complete. Launching workers. 00:07:53.646 ======================================================== 00:07:53.646 Latency(us) 00:07:53.646 Device Information : IOPS MiB/s Average min max 00:07:53.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14474.71 56.54 4421.65 849.47 20236.90 00:07:53.646 ======================================================== 00:07:53.646 Total : 14474.71 56.54 4421.65 849.47 20236.90 00:07:53.646 00:07:53.646 20:07:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:53.646 20:07:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:53.646 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.646 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:53.646 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.647 rmmod nvme_tcp 00:07:53.647 rmmod nvme_fabrics 00:07:53.647 rmmod nvme_keyring 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 77342 ']' 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 77342 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 77342 ']' 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 77342 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77342 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:53.647 killing process with pid 77342 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77342' 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 77342 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 77342 00:07:53.647 nvmf threads initialize successfully 00:07:53.647 bdev subsystem init successfully 00:07:53.647 created a nvmf target service 00:07:53.647 create targets's poll groups done 00:07:53.647 all subsystems of target started 00:07:53.647 nvmf target is running 00:07:53.647 all subsystems of target stopped 00:07:53.647 destroy targets's poll groups done 00:07:53.647 destroyed the nvmf target service 00:07:53.647 bdev subsystem finish successfully 00:07:53.647 nvmf threads destroy successfully 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.647 20:07:40 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.647 00:07:53.647 real 0m12.409s 00:07:53.647 user 0m44.607s 00:07:53.647 sys 0m2.023s 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.647 20:07:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.647 ************************************ 00:07:53.647 END TEST nvmf_example 00:07:53.647 ************************************ 00:07:53.647 20:07:41 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:53.647 20:07:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.647 20:07:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.647 20:07:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.647 ************************************ 00:07:53.647 START TEST nvmf_filesystem 00:07:53.647 ************************************ 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:53.647 * Looking for test storage... 
00:07:53.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:53.647 20:07:41 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:53.647 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:53.648 #define SPDK_CONFIG_H 00:07:53.648 #define SPDK_CONFIG_APPS 1 00:07:53.648 #define SPDK_CONFIG_ARCH native 00:07:53.648 #undef SPDK_CONFIG_ASAN 00:07:53.648 #define SPDK_CONFIG_AVAHI 1 00:07:53.648 #undef SPDK_CONFIG_CET 00:07:53.648 #define SPDK_CONFIG_COVERAGE 1 00:07:53.648 #define SPDK_CONFIG_CROSS_PREFIX 00:07:53.648 #undef SPDK_CONFIG_CRYPTO 00:07:53.648 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:53.648 #undef SPDK_CONFIG_CUSTOMOCF 00:07:53.648 #undef SPDK_CONFIG_DAOS 00:07:53.648 #define SPDK_CONFIG_DAOS_DIR 00:07:53.648 #define SPDK_CONFIG_DEBUG 1 00:07:53.648 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:53.648 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:53.648 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:53.648 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:53.648 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:53.648 #undef SPDK_CONFIG_DPDK_UADK 00:07:53.648 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:53.648 #define SPDK_CONFIG_EXAMPLES 1 00:07:53.648 #undef SPDK_CONFIG_FC 00:07:53.648 #define SPDK_CONFIG_FC_PATH 00:07:53.648 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:53.648 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:53.648 #undef SPDK_CONFIG_FUSE 00:07:53.648 #undef SPDK_CONFIG_FUZZER 00:07:53.648 #define SPDK_CONFIG_FUZZER_LIB 00:07:53.648 #define SPDK_CONFIG_GOLANG 1 00:07:53.648 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:53.648 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:53.648 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:53.648 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:53.648 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:53.648 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:53.648 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:53.648 #define SPDK_CONFIG_IDXD 1 00:07:53.648 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:53.648 #undef SPDK_CONFIG_IPSEC_MB 00:07:53.648 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:53.648 #define SPDK_CONFIG_ISAL 1 00:07:53.648 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:53.648 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:53.648 #define SPDK_CONFIG_LIBDIR 00:07:53.648 #undef SPDK_CONFIG_LTO 00:07:53.648 #define SPDK_CONFIG_MAX_LCORES 00:07:53.648 #define SPDK_CONFIG_NVME_CUSE 1 00:07:53.648 #undef SPDK_CONFIG_OCF 00:07:53.648 #define SPDK_CONFIG_OCF_PATH 00:07:53.648 #define SPDK_CONFIG_OPENSSL_PATH 00:07:53.648 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:53.648 #define SPDK_CONFIG_PGO_DIR 00:07:53.648 #undef SPDK_CONFIG_PGO_USE 00:07:53.648 #define SPDK_CONFIG_PREFIX /usr/local 00:07:53.648 #undef SPDK_CONFIG_RAID5F 00:07:53.648 #undef SPDK_CONFIG_RBD 00:07:53.648 #define SPDK_CONFIG_RDMA 1 00:07:53.648 
#define SPDK_CONFIG_RDMA_PROV verbs 00:07:53.648 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:53.648 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:53.648 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:53.648 #define SPDK_CONFIG_SHARED 1 00:07:53.648 #undef SPDK_CONFIG_SMA 00:07:53.648 #define SPDK_CONFIG_TESTS 1 00:07:53.648 #undef SPDK_CONFIG_TSAN 00:07:53.648 #define SPDK_CONFIG_UBLK 1 00:07:53.648 #define SPDK_CONFIG_UBSAN 1 00:07:53.648 #undef SPDK_CONFIG_UNIT_TESTS 00:07:53.648 #undef SPDK_CONFIG_URING 00:07:53.648 #define SPDK_CONFIG_URING_PATH 00:07:53.648 #undef SPDK_CONFIG_URING_ZNS 00:07:53.648 #define SPDK_CONFIG_USDT 1 00:07:53.648 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:53.648 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:53.648 #undef SPDK_CONFIG_VFIO_USER 00:07:53.648 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:53.648 #define SPDK_CONFIG_VHOST 1 00:07:53.648 #define SPDK_CONFIG_VIRTIO 1 00:07:53.648 #undef SPDK_CONFIG_VTUNE 00:07:53.648 #define SPDK_CONFIG_VTUNE_DIR 00:07:53.648 #define SPDK_CONFIG_WERROR 1 00:07:53.648 #define SPDK_CONFIG_WPDK_DIR 00:07:53.648 #undef SPDK_CONFIG_XNVME 00:07:53.648 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:53.648 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:53.649 20:07:41 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:53.649 20:07:41 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 
00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:53.649 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:53.650 
20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:53.650 20:07:41 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 77595 ]] 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 77595 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local 
requested_size=2147483648 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.vijval 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.vijval/tests/target /tmp/spdk.vijval 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264512512 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267887616 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494353408 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507157504 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12524978176 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5985730560 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12524978176 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5985730560 00:07:53.650 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267752448 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=139264 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use 
avail _ mount 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=95478349824 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4224430080 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:53.651 * Looking for test storage... 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=12524978176 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 
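For reference, the set_test_storage trace above reduces to a small piece of shell: pick a scratch directory, parse df -T into parallel arrays, and keep the first candidate whose free space covers the requested 2 GiB plus a 64 MiB margin (2147483648 + 67108864 = 2214592512 bytes). A condensed sketch of that logic, reusing the names from the trace; the byte conversion from df's 1K blocks is an inference from the values shown, not copied from the helper itself:

  # condensed sketch of the storage-candidate selection traced above
  testdir=/home/vagrant/spdk_repo/spdk/test/nvmf/target
  requested_size=$((2147483648 + 64 * 1024 * 1024))            # 2 GiB + 64 MiB margin
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source; fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024)); avails["$mount"]=$((avail * 1024))   # df reports 1K blocks
  done < <(df -T | grep -v Filesystem)
  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails[$mount]}
      if (( target_space >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir
          printf '* Found test storage at %s\n' "$target_dir"
          break
      fi
  done

In this run the winning mount is /home (btrfs, roughly 12.5 GB available), which is why SPDK_TEST_STORAGE ends up as /home/vagrant/spdk_repo/spdk/test/nvmf/target.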
00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.651 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.652 20:07:41 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:53.652 Cannot find device 
"nvmf_tgt_br" 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.652 Cannot find device "nvmf_tgt_br2" 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:53.652 Cannot find device "nvmf_tgt_br" 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:53.652 Cannot find device "nvmf_tgt_br2" 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.652 20:07:41 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:07:53.652 00:07:53.652 --- 10.0.0.2 ping statistics --- 00:07:53.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.652 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.652 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.652 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:07:53.652 00:07:53.652 --- 10.0.0.3 ping statistics --- 00:07:53.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.652 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:07:53.652 00:07:53.652 --- 10.0.0.1 ping statistics --- 00:07:53.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.652 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.652 ************************************ 00:07:53.652 START TEST nvmf_filesystem_no_in_capsule 00:07:53.652 ************************************ 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=77752 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 77752 00:07:53.652 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.653 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 77752 ']' 00:07:53.653 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.653 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
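The nvmftestinit / nvmf_veth_init steps traced above build a small virtual topology so the initiator (10.0.0.1 on the host) can reach the NVMe/TCP target (10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace) through the nvmf_br bridge. Condensed into plain commands, using the interface names and addresses from the trace (the error-tolerant cleanup attempts at the start are omitted), the wiring is roughly:

  # one namespace for the target, three veth pairs, one bridge joining the host-side ends
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host -> target reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> host

The successful pings (0.140, 0.062 and 0.050 ms above) are what let nvmftestinit return cleanly and hand control to the filesystem test proper.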
00:07:53.653 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.653 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.653 20:07:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.653 [2024-07-14 20:07:41.668847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:53.653 [2024-07-14 20:07:41.669562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.653 [2024-07-14 20:07:41.808421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.653 [2024-07-14 20:07:41.913743] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.653 [2024-07-14 20:07:41.914053] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.653 [2024-07-14 20:07:41.914281] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.653 [2024-07-14 20:07:41.914561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.653 [2024-07-14 20:07:41.914686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.653 [2024-07-14 20:07:41.914911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.653 [2024-07-14 20:07:41.915075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.653 [2024-07-14 20:07:41.915632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.653 [2024-07-14 20:07:41.915649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.653 [2024-07-14 20:07:42.677938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.653 20:07:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.653 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.912 Malloc1 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.912 [2024-07-14 20:07:42.869710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
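At this point the target process (pid 77752, started with ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) is up, and the rpc_cmd calls traced above provision it over the default /var/tmp/spdk.sock RPC socket: a TCP transport with in-capsule data size 0 (matching in_capsule=0 for this no_in_capsule variant), a 512 MiB / 512-byte-block malloc bdev, a subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. Done by hand with scripts/rpc.py the sequence would look roughly like the sketch below; rpc_cmd is a thin wrapper around it, and the flag annotations are inferred from the trace rather than taken from the helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0                 # -c 0: no in-capsule data
  $rpc bdev_malloc_create 512 512 -b Malloc1                        # 536870912 bytes, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # shows up later as nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side, as run a few lines further down: connect and wait for the serial in lsblk
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
      --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME            # waitforserial's readiness check

The serial SPDKISFASTANDAWESOME comes from NVMF_SERIAL in nvmf/common.sh, which is why waitforserial and the later lsblk greps key on it, and the 536870912-byte size is what the test compares against malloc_size a few lines further on.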
00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:53.912 { 00:07:53.912 "aliases": [ 00:07:53.912 "b50024f9-6100-445f-a6ee-774752efb0db" 00:07:53.912 ], 00:07:53.912 "assigned_rate_limits": { 00:07:53.912 "r_mbytes_per_sec": 0, 00:07:53.912 "rw_ios_per_sec": 0, 00:07:53.912 "rw_mbytes_per_sec": 0, 00:07:53.912 "w_mbytes_per_sec": 0 00:07:53.912 }, 00:07:53.912 "block_size": 512, 00:07:53.912 "claim_type": "exclusive_write", 00:07:53.912 "claimed": true, 00:07:53.912 "driver_specific": {}, 00:07:53.912 "memory_domains": [ 00:07:53.912 { 00:07:53.912 "dma_device_id": "system", 00:07:53.912 "dma_device_type": 1 00:07:53.912 }, 00:07:53.912 { 00:07:53.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.912 "dma_device_type": 2 00:07:53.912 } 00:07:53.912 ], 00:07:53.912 "name": "Malloc1", 00:07:53.912 "num_blocks": 1048576, 00:07:53.912 "product_name": "Malloc disk", 00:07:53.912 "supported_io_types": { 00:07:53.912 "abort": true, 00:07:53.912 "compare": false, 00:07:53.912 "compare_and_write": false, 00:07:53.912 "flush": true, 00:07:53.912 "nvme_admin": false, 00:07:53.912 "nvme_io": false, 00:07:53.912 "read": true, 00:07:53.912 "reset": true, 00:07:53.912 "unmap": true, 00:07:53.912 "write": true, 00:07:53.912 "write_zeroes": true 00:07:53.912 }, 00:07:53.912 "uuid": "b50024f9-6100-445f-a6ee-774752efb0db", 00:07:53.912 "zoned": false 00:07:53.912 } 00:07:53.912 ]' 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:53.912 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:54.171 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:54.171 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:54.171 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:54.171 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:54.171 20:07:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.171 20:07:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.171 20:07:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:54.171 20:07:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.171 20:07:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:54.171 20:07:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:56.698 20:07:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:56.698 20:07:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.630 ************************************ 00:07:57.630 START TEST filesystem_ext4 00:07:57.630 ************************************ 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:57.630 20:07:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:57.630 mke2fs 1.46.5 (30-Dec-2021) 00:07:57.630 Discarding device blocks: 0/522240 done 00:07:57.630 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:57.630 Filesystem UUID: 99bc8993-59b3-4da9-b63e-7d8b8366c1cd 00:07:57.630 Superblock backups stored on blocks: 00:07:57.630 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:57.630 00:07:57.630 Allocating group tables: 0/64 done 00:07:57.630 Writing inode tables: 0/64 done 00:07:57.630 Creating journal (8192 blocks): done 00:07:57.630 Writing superblocks and filesystem accounting information: 0/64 done 00:07:57.630 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.630 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.888 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:57.888 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.888 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:57.888 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:57.888 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.888 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 77752 00:07:57.888 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.888 20:07:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.889 00:07:57.889 real 0m0.435s 00:07:57.889 user 0m0.028s 00:07:57.889 sys 0m0.065s 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:57.889 ************************************ 00:07:57.889 END TEST filesystem_ext4 00:07:57.889 ************************************ 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.889 ************************************ 00:07:57.889 START TEST filesystem_btrfs 00:07:57.889 ************************************ 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:57.889 20:07:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.146 btrfs-progs v6.6.2 00:07:58.146 See https://btrfs.readthedocs.io for more information. 00:07:58.146 00:07:58.146 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:58.146 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.146 this does not affect your deployments: 00:07:58.146 - DUP for metadata (-m dup) 00:07:58.146 - enabled no-holes (-O no-holes) 00:07:58.146 - enabled free-space-tree (-R free-space-tree) 00:07:58.146 00:07:58.146 Label: (null) 00:07:58.146 UUID: c547f2b6-255d-4a1a-9a2f-c8ad6d29217e 00:07:58.146 Node size: 16384 00:07:58.146 Sector size: 4096 00:07:58.146 Filesystem size: 510.00MiB 00:07:58.146 Block group profiles: 00:07:58.146 Data: single 8.00MiB 00:07:58.146 Metadata: DUP 32.00MiB 00:07:58.146 System: DUP 8.00MiB 00:07:58.146 SSD detected: yes 00:07:58.146 Zoned device: no 00:07:58.146 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.146 Runtime features: free-space-tree 00:07:58.146 Checksum: crc32c 00:07:58.146 Number of devices: 1 00:07:58.146 Devices: 00:07:58.146 ID SIZE PATH 00:07:58.146 1 510.00MiB /dev/nvme0n1p1 00:07:58.146 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 77752 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.146 00:07:58.146 real 0m0.229s 00:07:58.146 user 0m0.022s 00:07:58.146 sys 0m0.060s 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.146 ************************************ 00:07:58.146 END TEST filesystem_btrfs 00:07:58.146 ************************************ 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:58.146 20:07:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.146 ************************************ 00:07:58.146 START TEST filesystem_xfs 00:07:58.146 ************************************ 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:58.146 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:58.147 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:58.405 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:58.405 = sectsz=512 attr=2, projid32bit=1 00:07:58.405 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:58.405 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:58.405 data = bsize=4096 blocks=130560, imaxpct=25 00:07:58.405 = sunit=0 swidth=0 blks 00:07:58.405 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:58.405 log =internal log bsize=4096 blocks=16384, version=2 00:07:58.405 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:58.405 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:58.971 Discarding blocks...Done. 
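[annotation] The three TEST blocks in this group (filesystem_ext4, filesystem_btrfs, filesystem_xfs) all drive the same nvmf_filesystem_create helper from target/filesystem.sh. Stripped of the xtrace prefixes, the host-side sequence traced above and continued below amounts to roughly the following sketch; the hostnqn/hostid, subsystem NQN and serial are the ones from this trace, while the loop structure and the $nvmfpid placeholder are a paraphrase, not the script verbatim:

    # connect the kernel initiator to the exported namespace and find it by serial
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
                 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 here

    # one GPT partition spanning the namespace, then a small smoke test per filesystem
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    for fstype in ext4 btrfs xfs; do
        force=-f; [ "$fstype" = ext4 ] && force=-F      # make_filesystem picks -F for ext4, -f otherwise
        mkfs."$fstype" "$force" "/dev/${nvme_name}p1"
        mount "/dev/${nvme_name}p1" /mnt/device
        touch /mnt/device/aaa && sync
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"                              # target process (pid 77752 in this run) must still be alive
    done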
00:07:58.971 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:58.971 20:07:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 77752 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.503 00:08:01.503 real 0m3.158s 00:08:01.503 user 0m0.022s 00:08:01.503 sys 0m0.062s 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:01.503 ************************************ 00:08:01.503 END TEST filesystem_xfs 00:08:01.503 ************************************ 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:01.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:01.503 
20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 77752 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 77752 ']' 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 77752 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77752 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77752' 00:08:01.503 killing process with pid 77752 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 77752 00:08:01.503 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 77752 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:02.070 00:08:02.070 real 0m9.300s 00:08:02.070 user 0m35.251s 00:08:02.070 sys 0m1.436s 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.070 ************************************ 00:08:02.070 END TEST nvmf_filesystem_no_in_capsule 00:08:02.070 ************************************ 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.070 
************************************ 00:08:02.070 START TEST nvmf_filesystem_in_capsule 00:08:02.070 ************************************ 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78060 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78060 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 78060 ']' 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.070 20:07:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.070 [2024-07-14 20:07:51.032756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:02.070 [2024-07-14 20:07:51.032900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.329 [2024-07-14 20:07:51.182635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.329 [2024-07-14 20:07:51.292880] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.329 [2024-07-14 20:07:51.292955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.329 [2024-07-14 20:07:51.292971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.329 [2024-07-14 20:07:51.292983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.329 [2024-07-14 20:07:51.292994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
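[annotation] The second group (nvmf_filesystem_in_capsule) repeats the same filesystem tests; the functional difference is that the TCP transport is created with a 4096-byte in-capsule data size instead of the 0 used by the no_in_capsule group. The rpc_cmd calls traced below perform the target-side setup; issued by hand against a running nvmf_tgt they would look roughly like this (scripts/rpc.py path and default RPC socket assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096           # -c 4096: in-capsule data size
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                     # 512 MiB ramdisk, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420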
00:08:02.329 [2024-07-14 20:07:51.293133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.329 [2024-07-14 20:07:51.293326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.329 [2024-07-14 20:07:51.294162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.329 [2024-07-14 20:07:51.294179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.266 [2024-07-14 20:07:52.127703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.266 Malloc1 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.266 20:07:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.266 [2024-07-14 20:07:52.335626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.266 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:03.525 { 00:08:03.525 "aliases": [ 00:08:03.525 "6c248a1c-b755-49c1-8a16-fd69691a6f2a" 00:08:03.525 ], 00:08:03.525 "assigned_rate_limits": { 00:08:03.525 "r_mbytes_per_sec": 0, 00:08:03.525 "rw_ios_per_sec": 0, 00:08:03.525 "rw_mbytes_per_sec": 0, 00:08:03.525 "w_mbytes_per_sec": 0 00:08:03.525 }, 00:08:03.525 "block_size": 512, 00:08:03.525 "claim_type": "exclusive_write", 00:08:03.525 "claimed": true, 00:08:03.525 "driver_specific": {}, 00:08:03.525 "memory_domains": [ 00:08:03.525 { 00:08:03.525 "dma_device_id": "system", 00:08:03.525 "dma_device_type": 1 00:08:03.525 }, 00:08:03.525 { 00:08:03.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.525 "dma_device_type": 2 00:08:03.525 } 00:08:03.525 ], 00:08:03.525 "name": "Malloc1", 00:08:03.525 "num_blocks": 1048576, 00:08:03.525 "product_name": "Malloc disk", 00:08:03.525 "supported_io_types": { 00:08:03.525 "abort": true, 00:08:03.525 "compare": false, 00:08:03.525 "compare_and_write": false, 00:08:03.525 "flush": true, 00:08:03.525 "nvme_admin": false, 00:08:03.525 "nvme_io": false, 00:08:03.525 "read": true, 00:08:03.525 "reset": true, 00:08:03.525 "unmap": true, 00:08:03.525 "write": true, 00:08:03.525 "write_zeroes": true 00:08:03.525 }, 00:08:03.525 "uuid": "6c248a1c-b755-49c1-8a16-fd69691a6f2a", 00:08:03.525 "zoned": false 00:08:03.525 } 00:08:03.525 ]' 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:03.525 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.821 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.821 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:03.821 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.821 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:03.821 20:07:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:05.723 20:07:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.100 ************************************ 00:08:07.100 START TEST filesystem_in_capsule_ext4 00:08:07.100 ************************************ 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:07.100 mke2fs 1.46.5 (30-Dec-2021) 00:08:07.100 Discarding device blocks: 0/522240 done 00:08:07.100 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:07.100 Filesystem UUID: 70018eb8-717d-4018-a2eb-bd6af2ad859a 00:08:07.100 Superblock backups stored on blocks: 00:08:07.100 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:07.100 00:08:07.100 Allocating group tables: 0/64 done 00:08:07.100 Writing inode tables: 0/64 done 00:08:07.100 Creating journal (8192 blocks): done 00:08:07.100 Writing superblocks and filesystem accounting information: 0/64 done 00:08:07.100 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:07.100 20:07:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 78060 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.100 00:08:07.100 real 0m0.358s 00:08:07.100 user 0m0.026s 00:08:07.100 sys 0m0.056s 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.100 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:07.100 ************************************ 00:08:07.100 END TEST filesystem_in_capsule_ext4 00:08:07.101 ************************************ 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.360 ************************************ 00:08:07.360 START TEST filesystem_in_capsule_btrfs 00:08:07.360 ************************************ 00:08:07.360 20:07:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:07.360 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:07.360 btrfs-progs v6.6.2 00:08:07.361 See https://btrfs.readthedocs.io for more information. 00:08:07.361 00:08:07.361 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:07.361 NOTE: several default settings have changed in version 5.15, please make sure 00:08:07.361 this does not affect your deployments: 00:08:07.361 - DUP for metadata (-m dup) 00:08:07.361 - enabled no-holes (-O no-holes) 00:08:07.361 - enabled free-space-tree (-R free-space-tree) 00:08:07.361 00:08:07.361 Label: (null) 00:08:07.361 UUID: 6e251b8d-0afa-4a84-aef8-16fa1f2264a9 00:08:07.361 Node size: 16384 00:08:07.361 Sector size: 4096 00:08:07.361 Filesystem size: 510.00MiB 00:08:07.361 Block group profiles: 00:08:07.361 Data: single 8.00MiB 00:08:07.361 Metadata: DUP 32.00MiB 00:08:07.361 System: DUP 8.00MiB 00:08:07.361 SSD detected: yes 00:08:07.361 Zoned device: no 00:08:07.361 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:07.361 Runtime features: free-space-tree 00:08:07.361 Checksum: crc32c 00:08:07.361 Number of devices: 1 00:08:07.361 Devices: 00:08:07.361 ID SIZE PATH 00:08:07.361 1 510.00MiB /dev/nvme0n1p1 00:08:07.361 00:08:07.361 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:07.361 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.619 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.619 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:07.619 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.619 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:07.619 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:07.619 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.619 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 78060 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.620 00:08:07.620 real 0m0.299s 00:08:07.620 user 0m0.025s 00:08:07.620 sys 0m0.059s 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:07.620 ************************************ 00:08:07.620 END TEST filesystem_in_capsule_btrfs 00:08:07.620 ************************************ 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.620 ************************************ 00:08:07.620 START TEST filesystem_in_capsule_xfs 00:08:07.620 ************************************ 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:07.620 20:07:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:07.879 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:07.879 = sectsz=512 attr=2, projid32bit=1 00:08:07.879 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:07.879 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:07.879 data = bsize=4096 blocks=130560, imaxpct=25 00:08:07.879 = sunit=0 swidth=0 blks 00:08:07.879 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:07.879 log =internal log bsize=4096 blocks=16384, version=2 00:08:07.879 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:07.879 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:08.446 Discarding blocks...Done. 
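[annotation] Before formatting, both groups cross-check capacity: the block device the initiator sees must be exactly the size of the malloc bdev behind it. The jq/sysfs arithmetic in the trace (bs=512, nb=1048576, bdev_size=512 MiB) boils down to the following hedged sketch, with the bdev name and device node taken from this run:

    bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
    nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
    malloc_size=$(( bs * nb ))                                               # 536870912 bytes = 512 MiB
    nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))                    # sysfs reports size in 512-byte sectors
    (( nvme_size == malloc_size )) || echo "size mismatch"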
00:08:08.446 20:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:08.446 20:07:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 78060 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.350 00:08:10.350 real 0m2.696s 00:08:10.350 user 0m0.023s 00:08:10.350 sys 0m0.054s 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:10.350 ************************************ 00:08:10.350 END TEST filesystem_in_capsule_xfs 00:08:10.350 ************************************ 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.350 20:07:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 78060 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 78060 ']' 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 78060 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:10.350 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78060 00:08:10.608 killing process with pid 78060 00:08:10.608 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:10.608 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:10.608 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78060' 00:08:10.608 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 78060 00:08:10.608 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 78060 00:08:10.867 ************************************ 00:08:10.867 END TEST nvmf_filesystem_in_capsule 00:08:10.867 ************************************ 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:10.867 00:08:10.867 real 0m8.905s 00:08:10.867 user 0m33.742s 00:08:10.867 sys 0m1.421s 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.867 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.867 rmmod nvme_tcp 00:08:11.126 rmmod nvme_fabrics 00:08:11.126 rmmod nvme_keyring 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.126 20:07:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.126 20:08:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:11.126 00:08:11.126 real 0m19.011s 00:08:11.126 user 1m9.234s 00:08:11.126 sys 0m3.250s 00:08:11.126 ************************************ 00:08:11.126 END TEST nvmf_filesystem 00:08:11.126 ************************************ 00:08:11.126 20:08:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.126 20:08:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.126 20:08:00 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:11.126 20:08:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:11.126 20:08:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.126 20:08:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.126 ************************************ 00:08:11.126 START TEST nvmf_target_discovery 00:08:11.126 ************************************ 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:11.126 * Looking for test storage... 
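The teardown traced just above is the standard nvmftestfini sequence that runs between suites: the initiator disconnects from the subsystem, the subsystem is deleted over RPC, the nvmf_tgt process (pid 78060 in this run) is killed, the initiator kernel modules are unloaded, and the virtual test network is removed before the discovery suite rebuilds everything from scratch. A condensed, hypothetical replay of that cleanup — scripts/rpc.py stands in for the harness's rpc_cmd wrapper and NVMFPID for the traced pid; this is not the literal common.sh code:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                      # drop the NVMe/TCP controller on the initiator
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # remove the subsystem from the running target
  kill "$NVMFPID" && wait "$NVMFPID"                                 # stop the nvmf_tgt reactor process
  modprobe -r nvme-tcp && modprobe -r nvme-fabrics                   # unload initiator modules (the rmmod lines above)
  ip netns delete nvmf_tgt_ns_spdk                                   # one plausible way to drop the target namespace;
                                                                     # the log's _remove_spdk_ns runs with xtrace disabled
  ip -4 addr flush nvmf_init_if                                      # clear the initiator-side address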
00:08:11.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.126 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.127 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:11.386 Cannot find device "nvmf_tgt_br" 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.386 Cannot find device "nvmf_tgt_br2" 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:11.386 Cannot find device "nvmf_tgt_br" 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:11.386 Cannot find device "nvmf_tgt_br2" 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:11.386 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.645 20:08:00 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:11.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:08:11.645 00:08:11.645 --- 10.0.0.2 ping statistics --- 00:08:11.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.645 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:11.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:11.645 00:08:11.645 --- 10.0.0.3 ping statistics --- 00:08:11.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.645 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:11.645 00:08:11.645 --- 10.0.0.1 ping statistics --- 00:08:11.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.645 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=78516 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 78516 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 78516 ']' 00:08:11.645 20:08:00 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:11.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:11.645 20:08:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:11.645 [2024-07-14 20:08:00.674486] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:11.645 [2024-07-14 20:08:00.674580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.903 [2024-07-14 20:08:00.813556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.903 [2024-07-14 20:08:00.917166] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.903 [2024-07-14 20:08:00.917221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.903 [2024-07-14 20:08:00.917233] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.903 [2024-07-14 20:08:00.917242] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.903 [2024-07-14 20:08:00.917249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
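At this point nvmf_veth_init has built the virtual topology the discovery test runs on, and nvmfappstart has launched nvmf_tgt inside the target namespace. A condensed sketch of that setup, using the commands visible in the trace (interface and namespace names are the harness's own; a second target pair, nvmf_tgt_if2 at 10.0.0.3, is created the same way and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br            # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator side
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The ping exchanges above verify both directions of that path, and the -m 0xF core mask is why four reactor threads are reported next.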
00:08:11.903 [2024-07-14 20:08:00.918019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.903 [2024-07-14 20:08:00.918095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.903 [2024-07-14 20:08:00.918778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.903 [2024-07-14 20:08:00.918788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 [2024-07-14 20:08:01.708372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 Null1 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.836 [2024-07-14 20:08:01.779647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 Null2 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 Null3 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 Null4 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.836 
20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 4420 00:08:13.105 00:08:13.105 Discovery Log Number of Records 6, Generation counter 6 00:08:13.105 =====Discovery Log Entry 0====== 00:08:13.105 trtype: tcp 00:08:13.105 adrfam: ipv4 00:08:13.105 subtype: current discovery subsystem 00:08:13.105 treq: not required 00:08:13.105 portid: 0 00:08:13.105 trsvcid: 4420 00:08:13.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:13.105 traddr: 10.0.0.2 00:08:13.105 eflags: explicit discovery connections, duplicate discovery information 00:08:13.105 sectype: none 00:08:13.105 =====Discovery Log Entry 1====== 00:08:13.105 trtype: tcp 00:08:13.105 adrfam: ipv4 00:08:13.105 subtype: nvme subsystem 00:08:13.105 treq: not required 00:08:13.105 portid: 0 00:08:13.105 trsvcid: 4420 00:08:13.105 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:13.105 traddr: 10.0.0.2 00:08:13.105 eflags: none 00:08:13.105 sectype: none 00:08:13.105 =====Discovery Log Entry 2====== 00:08:13.105 trtype: tcp 00:08:13.105 adrfam: ipv4 00:08:13.105 subtype: nvme subsystem 00:08:13.105 treq: not required 00:08:13.105 portid: 0 00:08:13.105 trsvcid: 4420 00:08:13.105 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:13.105 traddr: 10.0.0.2 00:08:13.105 eflags: none 00:08:13.105 sectype: none 00:08:13.105 =====Discovery Log Entry 3====== 00:08:13.105 trtype: tcp 00:08:13.105 adrfam: ipv4 00:08:13.105 subtype: nvme subsystem 00:08:13.105 treq: not required 00:08:13.105 portid: 0 00:08:13.105 trsvcid: 4420 00:08:13.105 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:13.105 traddr: 10.0.0.2 00:08:13.105 eflags: none 00:08:13.105 sectype: none 00:08:13.105 =====Discovery Log Entry 4====== 00:08:13.105 trtype: tcp 00:08:13.105 adrfam: ipv4 00:08:13.105 subtype: nvme subsystem 00:08:13.105 treq: not required 00:08:13.105 portid: 0 00:08:13.105 trsvcid: 4420 00:08:13.105 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:13.105 traddr: 10.0.0.2 00:08:13.105 eflags: none 00:08:13.105 sectype: none 00:08:13.105 =====Discovery Log Entry 5====== 00:08:13.105 trtype: tcp 00:08:13.105 adrfam: ipv4 00:08:13.105 subtype: discovery subsystem referral 00:08:13.105 treq: not required 00:08:13.105 portid: 0 00:08:13.105 trsvcid: 4430 00:08:13.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:13.105 traddr: 10.0.0.2 00:08:13.105 eflags: none 00:08:13.105 sectype: none 00:08:13.105 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:13.105 Perform nvmf subsystem discovery via RPC 00:08:13.105 20:08:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:13.105 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.105 20:08:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.105 [ 00:08:13.105 { 00:08:13.105 "allow_any_host": true, 00:08:13.105 "hosts": [], 00:08:13.105 "listen_addresses": [ 00:08:13.105 { 00:08:13.105 "adrfam": "IPv4", 00:08:13.105 "traddr": "10.0.0.2", 00:08:13.105 "trsvcid": "4420", 00:08:13.105 "trtype": "TCP" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:13.105 "subtype": "Discovery" 00:08:13.105 }, 00:08:13.105 { 00:08:13.105 "allow_any_host": true, 00:08:13.105 "hosts": [], 00:08:13.105 "listen_addresses": [ 00:08:13.105 { 
00:08:13.105 "adrfam": "IPv4", 00:08:13.105 "traddr": "10.0.0.2", 00:08:13.105 "trsvcid": "4420", 00:08:13.105 "trtype": "TCP" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "max_cntlid": 65519, 00:08:13.105 "max_namespaces": 32, 00:08:13.105 "min_cntlid": 1, 00:08:13.105 "model_number": "SPDK bdev Controller", 00:08:13.105 "namespaces": [ 00:08:13.105 { 00:08:13.105 "bdev_name": "Null1", 00:08:13.105 "name": "Null1", 00:08:13.105 "nguid": "C94D8358A2344620AF1D68CFBC21FB17", 00:08:13.105 "nsid": 1, 00:08:13.105 "uuid": "c94d8358-a234-4620-af1d-68cfbc21fb17" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.105 "serial_number": "SPDK00000000000001", 00:08:13.105 "subtype": "NVMe" 00:08:13.105 }, 00:08:13.105 { 00:08:13.105 "allow_any_host": true, 00:08:13.105 "hosts": [], 00:08:13.105 "listen_addresses": [ 00:08:13.105 { 00:08:13.105 "adrfam": "IPv4", 00:08:13.105 "traddr": "10.0.0.2", 00:08:13.105 "trsvcid": "4420", 00:08:13.105 "trtype": "TCP" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "max_cntlid": 65519, 00:08:13.105 "max_namespaces": 32, 00:08:13.105 "min_cntlid": 1, 00:08:13.105 "model_number": "SPDK bdev Controller", 00:08:13.105 "namespaces": [ 00:08:13.105 { 00:08:13.105 "bdev_name": "Null2", 00:08:13.105 "name": "Null2", 00:08:13.105 "nguid": "ADBCF5B3BB7846DEB1C4231C038AA715", 00:08:13.105 "nsid": 1, 00:08:13.105 "uuid": "adbcf5b3-bb78-46de-b1c4-231c038aa715" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:13.105 "serial_number": "SPDK00000000000002", 00:08:13.105 "subtype": "NVMe" 00:08:13.105 }, 00:08:13.105 { 00:08:13.105 "allow_any_host": true, 00:08:13.105 "hosts": [], 00:08:13.105 "listen_addresses": [ 00:08:13.105 { 00:08:13.105 "adrfam": "IPv4", 00:08:13.105 "traddr": "10.0.0.2", 00:08:13.105 "trsvcid": "4420", 00:08:13.105 "trtype": "TCP" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "max_cntlid": 65519, 00:08:13.105 "max_namespaces": 32, 00:08:13.105 "min_cntlid": 1, 00:08:13.105 "model_number": "SPDK bdev Controller", 00:08:13.105 "namespaces": [ 00:08:13.105 { 00:08:13.105 "bdev_name": "Null3", 00:08:13.105 "name": "Null3", 00:08:13.105 "nguid": "22243FD75BDF4CCE921ECC50FECAF786", 00:08:13.105 "nsid": 1, 00:08:13.105 "uuid": "22243fd7-5bdf-4cce-921e-cc50fecaf786" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:13.105 "serial_number": "SPDK00000000000003", 00:08:13.105 "subtype": "NVMe" 00:08:13.105 }, 00:08:13.105 { 00:08:13.105 "allow_any_host": true, 00:08:13.105 "hosts": [], 00:08:13.105 "listen_addresses": [ 00:08:13.105 { 00:08:13.105 "adrfam": "IPv4", 00:08:13.105 "traddr": "10.0.0.2", 00:08:13.105 "trsvcid": "4420", 00:08:13.105 "trtype": "TCP" 00:08:13.105 } 00:08:13.105 ], 00:08:13.105 "max_cntlid": 65519, 00:08:13.105 "max_namespaces": 32, 00:08:13.105 "min_cntlid": 1, 00:08:13.105 "model_number": "SPDK bdev Controller", 00:08:13.105 "namespaces": [ 00:08:13.105 { 00:08:13.105 "bdev_name": "Null4", 00:08:13.105 "name": "Null4", 00:08:13.105 "nguid": "C0D67D4910D74D1A8C805829296C4474", 00:08:13.105 "nsid": 1, 00:08:13.105 "uuid": "c0d67d49-10d7-4d1a-8c80-5829296c4474" 00:08:13.106 } 00:08:13.106 ], 00:08:13.106 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:13.106 "serial_number": "SPDK00000000000004", 00:08:13.106 "subtype": "NVMe" 00:08:13.106 } 00:08:13.106 ] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.106 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.376 rmmod nvme_tcp 00:08:13.376 rmmod nvme_fabrics 00:08:13.376 rmmod nvme_keyring 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 78516 ']' 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 78516 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 78516 ']' 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 78516 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78516 00:08:13.376 killing process with pid 78516 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- 
# process_name=reactor_0 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78516' 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 78516 00:08:13.376 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 78516 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:13.635 ************************************ 00:08:13.635 END TEST nvmf_target_discovery 00:08:13.635 ************************************ 00:08:13.635 00:08:13.635 real 0m2.454s 00:08:13.635 user 0m6.618s 00:08:13.635 sys 0m0.630s 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.635 20:08:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.635 20:08:02 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:13.635 20:08:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.635 20:08:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.635 20:08:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.635 ************************************ 00:08:13.635 START TEST nvmf_referrals 00:08:13.635 ************************************ 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:13.635 * Looking for test storage... 
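Before the referrals suite gets going, it is worth noting what produced the six discovery-log records and the nvmf_get_subsystems output earlier in the trace: the discovery test drove the target entirely over RPC. A condensed equivalent, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper and sizes taken from discovery.sh (NULL_BDEV_SIZE / NULL_BLOCK_SIZE):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                # same flags rpc_cmd passed earlier
  for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create Null$i 102400 512                   # one null bdev per subsystem
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

Entry 0 (the current discovery subsystem) is reported automatically; the four cnodeN subsystems and the 4430 referral account for the remaining five records, after which the test deletes everything again in reverse.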
00:08:13.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:13.635 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:13.636 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:13.894 Cannot find device "nvmf_tgt_br" 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.894 Cannot find device "nvmf_tgt_br2" 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:13.894 Cannot find device "nvmf_tgt_br" 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:13.894 Cannot find device "nvmf_tgt_br2" 
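The "Cannot find device" and "Cannot open network namespace" messages in this stretch are expected rather than failures: nvmf_veth_init begins by tearing down whatever interfaces, bridge, and namespace a previous run may have left behind, and each removal is guarded so it is a no-op on an already-clean host (the trace shows the guarding true running on the same common.sh line as the failed command). A rough equivalent of that idempotent pre-clean, not the literal common.sh code:

  ip link set nvmf_tgt_br nomaster        || true   # prints "Cannot find device" on a clean host
  ip link set nvmf_tgt_br2 nomaster       || true
  ip link delete nvmf_br type bridge      || true
  ip link delete nvmf_init_if             || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true   # "Cannot open network namespace"
  ip netns add nvmf_tgt_ns_spdk                     # then the topology is rebuilt from scratch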
00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:13.894 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:14.153 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:14.153 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.153 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.153 20:08:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:14.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:08:14.153 00:08:14.153 --- 10.0.0.2 ping statistics --- 00:08:14.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.153 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:14.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:14.153 00:08:14.153 --- 10.0.0.3 ping statistics --- 00:08:14.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.153 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:14.153 00:08:14.153 --- 10.0.0.1 ping statistics --- 00:08:14.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.153 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=78744 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 78744 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 78744 ']' 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
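With connectivity confirmed by the three pings above, nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk") and then waits for the RPC socket; the "Waiting for process..." line that follows is that wait in action. A minimal sketch of the sequence, using the paths and PID seen in the trace; the polling loop is only an illustration, the suite's waitforlisten helper does its own check:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                       # 78744 in this run
  # Illustration of the wait: poll the RPC socket until the app answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done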
00:08:14.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:14.153 20:08:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:14.153 [2024-07-14 20:08:03.175685] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:14.153 [2024-07-14 20:08:03.175820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.410 [2024-07-14 20:08:03.324165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.410 [2024-07-14 20:08:03.426832] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.410 [2024-07-14 20:08:03.427228] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.410 [2024-07-14 20:08:03.427375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.410 [2024-07-14 20:08:03.427576] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.410 [2024-07-14 20:08:03.427694] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.410 [2024-07-14 20:08:03.427943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.410 [2024-07-14 20:08:03.428019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.410 [2024-07-14 20:08:03.428521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.410 [2024-07-14 20:08:03.428533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 [2024-07-14 20:08:04.184182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 [2024-07-14 20:08:04.212806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.344 20:08:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.344 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.603 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:15.862 20:08:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:16.121 20:08:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.121 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.380 rmmod nvme_tcp 00:08:16.380 rmmod nvme_fabrics 00:08:16.380 rmmod nvme_keyring 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 78744 ']' 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 78744 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 78744 ']' 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 78744 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78744 00:08:16.380 killing process with pid 78744 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78744' 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 78744 00:08:16.380 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 78744 00:08:16.638 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.639 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.639 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.639 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.639 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.639 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.639 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.639 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.898 20:08:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:16.898 00:08:16.898 real 0m3.159s 00:08:16.898 user 0m10.007s 00:08:16.898 sys 0m0.870s 00:08:16.898 ************************************ 00:08:16.898 END TEST nvmf_referrals 00:08:16.898 
************************************ 00:08:16.898 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.898 20:08:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.898 20:08:05 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.898 20:08:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:16.898 20:08:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.898 20:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.898 ************************************ 00:08:16.898 START TEST nvmf_connect_disconnect 00:08:16.898 ************************************ 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.898 * Looking for test storage... 00:08:16.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.898 20:08:05 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.898 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.899 
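For reference, the nvmf_referrals test that ended above ("END TEST nvmf_referrals") reduces to a short RPC/discovery round trip: register a discovery listener, add three referrals, check that they appear both via RPC and in the discovery log page read by nvme discover, then remove them and check that the log page is empty again. The sketch below is condensed from the traced commands (the --hostnqn/--hostid arguments and the later per-NQN referral variants are omitted); it is a summary of the trace, not the referrals.sh source:

  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done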
20:08:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:16.899 Cannot find device "nvmf_tgt_br" 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 
-- # true 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.899 Cannot find device "nvmf_tgt_br2" 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:16.899 Cannot find device "nvmf_tgt_br" 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:16.899 Cannot find device "nvmf_tgt_br2" 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:08:16.899 20:08:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.157 20:08:06 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:17.157 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.158 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:17.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:08:17.158 00:08:17.158 --- 10.0.0.2 ping statistics --- 00:08:17.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.158 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:17.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:17.416 00:08:17.416 --- 10.0.0.3 ping statistics --- 00:08:17.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.416 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:17.416 00:08:17.416 --- 10.0.0.1 ping statistics --- 00:08:17.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.416 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=79050 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 79050 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 79050 ']' 00:08:17.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:17.416 20:08:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:17.416 [2024-07-14 20:08:06.334241] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:17.416 [2024-07-14 20:08:06.334566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.416 [2024-07-14 20:08:06.471415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.675 [2024-07-14 20:08:06.576047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
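A note on the tracing hints printed around this point (the 0xFFFF tracepoint group mask above and the spdk_trace suggestion that follows): because the target is started with -e 0xFFFF, a snapshot of its events can be captured while the test runs, as the notices themselves suggest:

  spdk_trace -s nvmf -i 0      # live snapshot of the running nvmf target
  cp /dev/shm/nvmf_trace.0 .   # or keep the shared-memory trace file for offline analysis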
00:08:17.675 [2024-07-14 20:08:06.576370] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.675 [2024-07-14 20:08:06.576390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.675 [2024-07-14 20:08:06.576400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.675 [2024-07-14 20:08:06.576407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.675 [2024-07-14 20:08:06.576507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.675 [2024-07-14 20:08:06.576737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.675 [2024-07-14 20:08:06.577618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.675 [2024-07-14 20:08:06.577634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.611 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:18.611 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:18.611 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.611 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.611 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 [2024-07-14 20:08:07.374768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
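With the target up, connect_disconnect.sh provisions a single malloc-backed namespace and exposes it over TCP (the add_listener call follows just below), then runs 100 host-side connect/disconnect iterations; each one prints one of the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines that fill the remainder of this section. The provisioning side, condensed from the traced RPCs:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                    # 64 MB bdev, 512-byte blocks, named Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host side of each iteration is roughly "nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420" followed by "nvme disconnect -n nqn.2016-06.io.spdk:cnode1"; the -i 8 and the NQN come from NVME_CONNECT and NQN in the trace, while the remaining connect flags are an assumption, since the loop body itself is not shown verbatim in this excerpt.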
00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.612 [2024-07-14 20:08:07.453982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:18.612 20:08:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:21.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:30.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.659 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:03.299 rmmod nvme_tcp 00:12:03.299 rmmod nvme_fabrics 00:12:03.299 rmmod nvme_keyring 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 79050 ']' 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 79050 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 79050 ']' 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 79050 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79050 00:12:03.299 killing process with pid 79050 00:12:03.299 20:11:52 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79050' 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 79050 00:12:03.299 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 79050 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.558 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.817 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:03.817 00:12:03.817 real 3m46.851s 00:12:03.817 user 14m45.739s 00:12:03.818 sys 0m19.849s 00:12:03.818 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.818 ************************************ 00:12:03.818 END TEST nvmf_connect_disconnect 00:12:03.818 ************************************ 00:12:03.818 20:11:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.818 20:11:52 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.818 20:11:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:03.818 20:11:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.818 20:11:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.818 ************************************ 00:12:03.818 START TEST nvmf_multitarget 00:12:03.818 ************************************ 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.818 * Looking for test storage... 
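The nvmf_connect_disconnect run that finishes above follows a simple shape: bring up an NVMe-oF TCP target through the framework's rpc_cmd wrapper, then connect and disconnect a host controller 100 times. A minimal sketch of the equivalent commands, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (the NQN, address, and sizes mirror the log; the loop body is condensed):

    # Target bring-up, as driven by connect_disconnect.sh above
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                       # log shows the returned bdev name: Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host-side loop that produced the "disconnected 1 controller(s)" lines (-i 8 = eight I/O queues)
    for i in $(seq 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # the test script waits for the namespace to show up before tearing the controller down
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done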
00:12:03.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.818 20:11:52 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:03.818 Cannot find device "nvmf_tgt_br" 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.818 Cannot find device "nvmf_tgt_br2" 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:03.818 Cannot find device "nvmf_tgt_br" 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:03.818 Cannot find device "nvmf_tgt_br2" 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:12:03.818 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:12:04.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:04.078 20:11:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:04.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:12:04.078 00:12:04.078 --- 10.0.0.2 ping statistics --- 00:12:04.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.078 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:04.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:04.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:12:04.078 00:12:04.078 --- 10.0.0.3 ping statistics --- 00:12:04.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.078 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:04.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:04.078 00:12:04.078 --- 10.0.0.1 ping statistics --- 00:12:04.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.078 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.078 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=82839 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 82839 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 82839 ']' 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:04.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
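The nvmf_veth_init block above builds the virtual topology the remaining TCP tests run against: an initiator veth (nvmf_init_if, 10.0.0.1/24) in the root namespace, two target veths (10.0.0.2 and 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, everything bridged over nvmf_br, plus an iptables rule admitting TCP port 4420; the pings confirm reachability before nvmf_tgt is launched inside the namespace. A condensed sketch for a single target interface, using the same names and addresses as the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator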
00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:04.343 20:11:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:04.343 [2024-07-14 20:11:53.229571] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:04.343 [2024-07-14 20:11:53.229684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.343 [2024-07-14 20:11:53.374131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.600 [2024-07-14 20:11:53.468998] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.600 [2024-07-14 20:11:53.469349] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.600 [2024-07-14 20:11:53.469509] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.600 [2024-07-14 20:11:53.469562] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.600 [2024-07-14 20:11:53.469674] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.600 [2024-07-14 20:11:53.469936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.600 [2024-07-14 20:11:53.470075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.600 [2024-07-14 20:11:53.470164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.600 [2024-07-14 20:11:53.470164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:05.165 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:05.422 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:05.423 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:05.423 "nvmf_tgt_1" 00:12:05.423 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:05.681 "nvmf_tgt_2" 00:12:05.681 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:05.681 20:11:54 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:05.681 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:05.681 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:05.939 true 00:12:05.939 20:11:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:05.939 true 00:12:06.197 20:11:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.198 rmmod nvme_tcp 00:12:06.198 rmmod nvme_fabrics 00:12:06.198 rmmod nvme_keyring 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 82839 ']' 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 82839 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 82839 ']' 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 82839 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:06.198 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82839 00:12:06.456 killing process with pid 82839 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82839' 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 82839 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 82839 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.456 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.714 20:11:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:06.714 ************************************ 00:12:06.714 END TEST nvmf_multitarget 00:12:06.714 ************************************ 00:12:06.714 00:12:06.714 real 0m2.837s 00:12:06.714 user 0m9.303s 00:12:06.714 sys 0m0.703s 00:12:06.714 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:06.714 20:11:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:06.714 20:11:55 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:06.714 20:11:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:06.714 20:11:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:06.714 20:11:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.714 ************************************ 00:12:06.714 START TEST nvmf_rpc 00:12:06.714 ************************************ 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:06.714 * Looking for test storage... 
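The nvmf_multitarget test that ends above verifies that extra target objects can be created and destroyed inside a single nvmf_tgt process: it counts targets with nvmf_get_targets, adds nvmf_tgt_1 and nvmf_tgt_2, checks the count reached 3, deletes both, and checks the count dropped back to 1. A sketch of that flow with the multitarget_rpc.py helper seen in the log (jq is only used to count the returned array):

    RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32          # -s 32: subsystem array size
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]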
00:12:06.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:06.714 Cannot find device "nvmf_tgt_br" 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:06.714 Cannot find device "nvmf_tgt_br2" 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:06.714 Cannot find device "nvmf_tgt_br" 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:06.714 Cannot find device "nvmf_tgt_br2" 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:12:06.714 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:06.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:06.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:06.973 20:11:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:06.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:12:06.973 00:12:06.973 --- 10.0.0.2 ping statistics --- 00:12:06.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.973 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:06.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:06.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:12:06.973 00:12:06.973 --- 10.0.0.3 ping statistics --- 00:12:06.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.973 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:06.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:06.973 00:12:06.973 --- 10.0.0.1 ping statistics --- 00:12:06.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.973 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:06.973 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=83068 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 83068 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 83068 ']' 00:12:07.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:07.231 20:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.231 [2024-07-14 20:11:56.130496] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:07.231 [2024-07-14 20:11:56.130847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.231 [2024-07-14 20:11:56.265981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.490 [2024-07-14 20:11:56.365234] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.490 [2024-07-14 20:11:56.365560] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:07.490 [2024-07-14 20:11:56.365725] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.490 [2024-07-14 20:11:56.365779] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.490 [2024-07-14 20:11:56.365904] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.490 [2024-07-14 20:11:56.366039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.490 [2024-07-14 20:11:56.366507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.490 [2024-07-14 20:11:56.366610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.490 [2024-07-14 20:11:56.366708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.058 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:08.058 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:08.058 20:11:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.058 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.058 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:08.317 "poll_groups": [ 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_000", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [] 00:12:08.317 }, 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_001", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [] 00:12:08.317 }, 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_002", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [] 00:12:08.317 }, 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_003", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [] 00:12:08.317 } 00:12:08.317 ], 00:12:08.317 "tick_rate": 2200000000 00:12:08.317 }' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # 
jq '.poll_groups[].name' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.317 [2024-07-14 20:11:57.288460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:08.317 "poll_groups": [ 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_000", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [ 00:12:08.317 { 00:12:08.317 "trtype": "TCP" 00:12:08.317 } 00:12:08.317 ] 00:12:08.317 }, 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_001", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [ 00:12:08.317 { 00:12:08.317 "trtype": "TCP" 00:12:08.317 } 00:12:08.317 ] 00:12:08.317 }, 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_002", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [ 00:12:08.317 { 00:12:08.317 "trtype": "TCP" 00:12:08.317 } 00:12:08.317 ] 00:12:08.317 }, 00:12:08.317 { 00:12:08.317 "admin_qpairs": 0, 00:12:08.317 "completed_nvme_io": 0, 00:12:08.317 "current_admin_qpairs": 0, 00:12:08.317 "current_io_qpairs": 0, 00:12:08.317 "io_qpairs": 0, 00:12:08.317 "name": "nvmf_tgt_poll_group_003", 00:12:08.317 "pending_bdev_io": 0, 00:12:08.317 "transports": [ 00:12:08.317 { 00:12:08.317 "trtype": "TCP" 00:12:08.317 } 00:12:08.317 ] 00:12:08.317 } 00:12:08.317 ], 00:12:08.317 "tick_rate": 2200000000 00:12:08.317 }' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:08.317 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.576 Malloc1 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.576 [2024-07-14 20:11:57.474796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -a 10.0.0.2 -s 4420 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -a 10.0.0.2 -s 4420 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -a 10.0.0.2 -s 4420 00:12:08.576 [2024-07-14 20:11:57.503240] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4' 00:12:08.576 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:08.576 could not add new controller: failed to write to nvme-fabrics device 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:08.576 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:08.577 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:08.577 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:08.577 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.577 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.577 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.577 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.835 20:11:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.835 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:08.835 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.835 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:08.835 20:11:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.739 20:11:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.739 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.740 [2024-07-14 20:11:59.804569] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4' 00:12:10.740 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:10.740 could not add new controller: failed to write to nvme-fabrics device 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.740 20:11:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.998 20:11:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.998 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:10.998 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.998 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:10.998 20:11:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:12.943 20:12:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:12.943 20:12:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.943 20:12:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:12.943 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:12.943 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.943 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:12.943 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.200 20:12:02 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.200 [2024-07-14 20:12:02.107822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.200 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.457 20:12:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.457 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:13.457 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.457 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:13.457 20:12:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:15.354 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:15.354 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:15.354 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.354 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:15.354 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.354 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:15.354 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.613 [2024-07-14 20:12:04.532749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.613 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.872 20:12:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.872 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 
-- # local i=0 00:12:15.872 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.872 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:15.872 20:12:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:17.773 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:17.773 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.773 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:17.773 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:17.773 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.773 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:17.773 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 [2024-07-14 20:12:06.945506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.031 20:12:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.289 20:12:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.289 20:12:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:18.289 20:12:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.289 20:12:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:18.289 20:12:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:20.191 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:20.191 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.192 [2024-07-14 20:12:09.252457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.192 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.450 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.450 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.450 20:12:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.450 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:20.450 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.450 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:20.450 20:12:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.983 
20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 [2024-07-14 20:12:11.559326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 20:12:11 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:22.983 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.984 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:22.984 20:12:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 [2024-07-14 20:12:13.870118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.885 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.886 [2024-07-14 20:12:13.922128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.886 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 [2024-07-14 20:12:13.970224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
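Each iteration of the loop traced here builds and tears down the same subsystem over TCP without connecting a host. Reduced to direct rpc.py calls (a sketch only, assuming rpc_cmd forwards to rpc.py; the NQN, serial, address and Malloc1 bdev are the ones from this run):

    #!/usr/bin/env bash
    # One iteration of the create/teardown cycle above (sketch, not test output).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Build the subsystem: serial number, TCP listener, namespace, open host access.
    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"

    # Tear it down again: drop namespace 1, then the subsystem itself.
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
    "$rpc" nvmf_delete_subsystem "$nqn"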
00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 [2024-07-14 20:12:14.022227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
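Earlier iterations in this test also connected a host with nvme connect and only proceeded once the namespace had appeared in lsblk (and, after nvme disconnect, disappeared again). The waitforserial / waitforserial_disconnect helpers traced there amount to roughly the following polling loops (a rough sketch; the 15-attempt, 2-second budget matches the counters visible in the trace):

    # Rough standalone equivalents of the waitforserial helpers used above.

    # Wait until a block device with the given serial shows up (after `nvme connect`).
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }

    # Wait until no device with the serial remains (after `nvme disconnect`).
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }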
00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 [2024-07-14 20:12:14.082288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:25.144 "poll_groups": [ 00:12:25.144 { 00:12:25.144 "admin_qpairs": 2, 00:12:25.144 "completed_nvme_io": 68, 00:12:25.144 "current_admin_qpairs": 0, 00:12:25.144 "current_io_qpairs": 0, 00:12:25.144 "io_qpairs": 16, 00:12:25.144 "name": "nvmf_tgt_poll_group_000", 00:12:25.144 "pending_bdev_io": 0, 00:12:25.144 "transports": [ 00:12:25.144 { 00:12:25.144 "trtype": "TCP" 00:12:25.144 } 00:12:25.144 ] 00:12:25.144 }, 00:12:25.144 { 00:12:25.144 "admin_qpairs": 3, 00:12:25.144 "completed_nvme_io": 68, 00:12:25.144 "current_admin_qpairs": 0, 00:12:25.144 "current_io_qpairs": 0, 00:12:25.144 "io_qpairs": 17, 00:12:25.144 "name": "nvmf_tgt_poll_group_001", 00:12:25.144 "pending_bdev_io": 0, 00:12:25.144 "transports": [ 00:12:25.144 { 00:12:25.144 "trtype": "TCP" 00:12:25.144 } 00:12:25.144 ] 00:12:25.144 }, 00:12:25.144 { 00:12:25.144 "admin_qpairs": 1, 00:12:25.144 
"completed_nvme_io": 118, 00:12:25.144 "current_admin_qpairs": 0, 00:12:25.144 "current_io_qpairs": 0, 00:12:25.144 "io_qpairs": 19, 00:12:25.144 "name": "nvmf_tgt_poll_group_002", 00:12:25.144 "pending_bdev_io": 0, 00:12:25.144 "transports": [ 00:12:25.144 { 00:12:25.144 "trtype": "TCP" 00:12:25.144 } 00:12:25.144 ] 00:12:25.144 }, 00:12:25.144 { 00:12:25.144 "admin_qpairs": 1, 00:12:25.144 "completed_nvme_io": 166, 00:12:25.144 "current_admin_qpairs": 0, 00:12:25.144 "current_io_qpairs": 0, 00:12:25.144 "io_qpairs": 18, 00:12:25.144 "name": "nvmf_tgt_poll_group_003", 00:12:25.144 "pending_bdev_io": 0, 00:12:25.144 "transports": [ 00:12:25.144 { 00:12:25.144 "trtype": "TCP" 00:12:25.144 } 00:12:25.144 ] 00:12:25.144 } 00:12:25.144 ], 00:12:25.144 "tick_rate": 2200000000 00:12:25.144 }' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:25.144 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.402 rmmod nvme_tcp 00:12:25.402 rmmod nvme_fabrics 00:12:25.402 rmmod nvme_keyring 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 83068 ']' 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 83068 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 83068 ']' 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 83068 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83068 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83068' 00:12:25.402 killing process with pid 83068 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 83068 00:12:25.402 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 83068 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.661 20:12:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:25.920 ************************************ 00:12:25.920 END TEST nvmf_rpc 00:12:25.920 ************************************ 00:12:25.920 00:12:25.920 real 0m19.152s 00:12:25.920 user 1m12.334s 00:12:25.920 sys 0m2.290s 00:12:25.920 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:25.920 20:12:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.920 20:12:14 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.920 20:12:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:25.920 20:12:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:25.920 20:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.920 ************************************ 00:12:25.920 START TEST nvmf_invalid 00:12:25.920 ************************************ 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.920 * Looking for test storage... 
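The nvmftestfini sequence that closes nvmf_rpc above unloads the host-side NVMe/TCP modules, stops the target and flushes the test interface before the next suite starts. Condensed (a sketch; the pid 83068 and the nvmf_init_if name are specific to this run):

    # Condensed view of the cleanup traced above (sketch, values from this run).
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 83068 && wait 83068       # stop the nvmf_tgt process started for this suite and reap it
    ip -4 addr flush nvmf_init_if  # flush the addresses configured on the test interface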
00:12:25.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.920 
20:12:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.920 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.921 20:12:14 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:25.921 Cannot find device "nvmf_tgt_br" 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.921 Cannot find device "nvmf_tgt_br2" 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:25.921 Cannot find device "nvmf_tgt_br" 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:25.921 Cannot find device "nvmf_tgt_br2" 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:12:25.921 20:12:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.180 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:26.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:26.180 00:12:26.180 --- 10.0.0.2 ping statistics --- 00:12:26.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.180 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:26.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:26.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:26.180 00:12:26.180 --- 10.0.0.3 ping statistics --- 00:12:26.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.180 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:26.180 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:26.439 00:12:26.439 --- 10.0.0.1 ping statistics --- 00:12:26.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.439 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=83585 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 83585 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 83585 ']' 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.439 20:12:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:26.439 [2024-07-14 20:12:15.356399] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:26.439 [2024-07-14 20:12:15.356535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.439 [2024-07-14 20:12:15.500767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.697 [2024-07-14 20:12:15.598756] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.697 [2024-07-14 20:12:15.598843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.697 [2024-07-14 20:12:15.598894] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.697 [2024-07-14 20:12:15.598907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.697 [2024-07-14 20:12:15.598917] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.697 [2024-07-14 20:12:15.599042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.697 [2024-07-14 20:12:15.599149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.697 [2024-07-14 20:12:15.599882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.697 [2024-07-14 20:12:15.599916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30482 00:12:27.631 [2024-07-14 20:12:16.661984] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/14 20:12:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30482 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:27.631 request: 00:12:27.631 { 00:12:27.631 "method": "nvmf_create_subsystem", 00:12:27.631 "params": { 00:12:27.631 "nqn": "nqn.2016-06.io.spdk:cnode30482", 00:12:27.631 "tgt_name": "foobar" 00:12:27.631 } 00:12:27.631 } 00:12:27.631 Got JSON-RPC error response 00:12:27.631 GoRPCClient: error on JSON-RPC call' 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/14 20:12:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30482 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:27.631 request: 00:12:27.631 { 
00:12:27.631 "method": "nvmf_create_subsystem", 00:12:27.631 "params": { 00:12:27.631 "nqn": "nqn.2016-06.io.spdk:cnode30482", 00:12:27.631 "tgt_name": "foobar" 00:12:27.631 } 00:12:27.631 } 00:12:27.631 Got JSON-RPC error response 00:12:27.631 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:27.631 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13999 00:12:27.890 [2024-07-14 20:12:16.954406] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13999: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:28.149 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/14 20:12:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13999 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:28.149 request: 00:12:28.149 { 00:12:28.149 "method": "nvmf_create_subsystem", 00:12:28.149 "params": { 00:12:28.149 "nqn": "nqn.2016-06.io.spdk:cnode13999", 00:12:28.149 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:28.149 } 00:12:28.149 } 00:12:28.149 Got JSON-RPC error response 00:12:28.149 GoRPCClient: error on JSON-RPC call' 00:12:28.149 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/14 20:12:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13999 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:28.149 request: 00:12:28.149 { 00:12:28.149 "method": "nvmf_create_subsystem", 00:12:28.149 "params": { 00:12:28.149 "nqn": "nqn.2016-06.io.spdk:cnode13999", 00:12:28.149 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:28.149 } 00:12:28.149 } 00:12:28.149 Got JSON-RPC error response 00:12:28.149 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:28.149 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:28.149 20:12:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4789 00:12:28.149 [2024-07-14 20:12:17.186698] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4789: invalid model number 'SPDK_Controller' 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/14 20:12:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode4789], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:28.149 request: 00:12:28.149 { 00:12:28.149 "method": "nvmf_create_subsystem", 00:12:28.149 "params": { 00:12:28.149 "nqn": "nqn.2016-06.io.spdk:cnode4789", 00:12:28.149 "model_number": "SPDK_Controller\u001f" 00:12:28.149 } 00:12:28.149 } 00:12:28.149 Got JSON-RPC error response 00:12:28.149 GoRPCClient: error on JSON-RPC call' 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/14 20:12:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode4789], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:28.149 request: 00:12:28.149 { 00:12:28.149 "method": "nvmf_create_subsystem", 00:12:28.149 "params": { 00:12:28.149 "nqn": "nqn.2016-06.io.spdk:cnode4789", 00:12:28.149 "model_number": "SPDK_Controller\u001f" 00:12:28.149 } 00:12:28.149 } 00:12:28.149 Got JSON-RPC error response 00:12:28.149 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.149 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:28.409 20:12:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:28.409 20:12:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'aDC64z>uLT-Z8Rs&PC!' 00:12:28.409 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'aDC64z>uLT-Z8Rs&PC!' nqn.2016-06.io.spdk:cnode9457 00:12:28.669 [2024-07-14 20:12:17.591243] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9457: invalid serial number 'aDC64z>uLT-Z8Rs&PC!' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/14 20:12:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9457 serial_number:aDC64z>uLT-Z8Rs&PC!], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN aDC64z>uLT-Z8Rs&PC! 00:12:28.670 request: 00:12:28.670 { 00:12:28.670 "method": "nvmf_create_subsystem", 00:12:28.670 "params": { 00:12:28.670 "nqn": "nqn.2016-06.io.spdk:cnode9457", 00:12:28.670 "serial_number": "aDC64z>uLT-Z8Rs&\u007fP\u007fC!" 00:12:28.670 } 00:12:28.670 } 00:12:28.670 Got JSON-RPC error response 00:12:28.670 GoRPCClient: error on JSON-RPC call' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/14 20:12:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9457 serial_number:aDC64z>uLT-Z8Rs&PC!], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN aDC64z>uLT-Z8Rs&PC! 00:12:28.670 request: 00:12:28.670 { 00:12:28.670 "method": "nvmf_create_subsystem", 00:12:28.670 "params": { 00:12:28.670 "nqn": "nqn.2016-06.io.spdk:cnode9457", 00:12:28.670 "serial_number": "aDC64z>uLT-Z8Rs&\u007fP\u007fC!" 
00:12:28.670 } 00:12:28.670 } 00:12:28.670 Got JSON-RPC error response 00:12:28.670 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:28.670 20:12:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:28.670 20:12:17 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.670 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.671 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 
20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 
20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm^Ph%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR' 00:12:28.930 20:12:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'm^Ph%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR' nqn.2016-06.io.spdk:cnode8234 00:12:29.189 [2024-07-14 20:12:18.083942] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8234: invalid model number 'm^Ph%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR' 00:12:29.189 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/14 20:12:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:m^Ph%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR nqn:nqn.2016-06.io.spdk:cnode8234], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN m^Ph%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR 00:12:29.189 request: 00:12:29.189 { 00:12:29.189 "method": "nvmf_create_subsystem", 00:12:29.189 "params": { 00:12:29.189 "nqn": "nqn.2016-06.io.spdk:cnode8234", 00:12:29.189 "model_number": "m^Ph\u007f%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR" 00:12:29.189 } 00:12:29.189 } 00:12:29.189 Got JSON-RPC error response 00:12:29.189 GoRPCClient: error on JSON-RPC call' 00:12:29.189 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/14 20:12:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:m^Ph%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR nqn:nqn.2016-06.io.spdk:cnode8234], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN m^Ph%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR 00:12:29.189 request: 00:12:29.189 { 00:12:29.189 "method": "nvmf_create_subsystem", 00:12:29.189 "params": { 00:12:29.189 "nqn": "nqn.2016-06.io.spdk:cnode8234", 00:12:29.189 "model_number": "m^Ph\u007f%Bj:m;k-~H(m 0I|.wbY9BK+A9:2b@jN`VPR" 00:12:29.189 } 00:12:29.189 } 00:12:29.189 Got JSON-RPC error response 00:12:29.189 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:29.189 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_transport --trtype tcp 00:12:29.446 [2024-07-14 20:12:18.380390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.446 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:29.704 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:29.704 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:29.704 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:29.704 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:29.704 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:29.962 [2024-07-14 20:12:18.970185] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:29.962 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/14 20:12:18 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:29.962 request: 00:12:29.962 { 00:12:29.962 "method": "nvmf_subsystem_remove_listener", 00:12:29.962 "params": { 00:12:29.962 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:29.962 "listen_address": { 00:12:29.962 "trtype": "tcp", 00:12:29.962 "traddr": "", 00:12:29.962 "trsvcid": "4421" 00:12:29.962 } 00:12:29.962 } 00:12:29.962 } 00:12:29.962 Got JSON-RPC error response 00:12:29.962 GoRPCClient: error on JSON-RPC call' 00:12:29.962 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/14 20:12:18 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:29.962 request: 00:12:29.962 { 00:12:29.962 "method": "nvmf_subsystem_remove_listener", 00:12:29.962 "params": { 00:12:29.962 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:29.962 "listen_address": { 00:12:29.962 "trtype": "tcp", 00:12:29.962 "traddr": "", 00:12:29.962 "trsvcid": "4421" 00:12:29.962 } 00:12:29.962 } 00:12:29.962 } 00:12:29.962 Got JSON-RPC error response 00:12:29.962 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:29.962 20:12:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10099 -i 0 00:12:30.220 [2024-07-14 20:12:19.262636] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10099: invalid cntlid range [0-65519] 00:12:30.221 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/14 20:12:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode10099], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:30.221 request: 00:12:30.221 { 00:12:30.221 "method": "nvmf_create_subsystem", 00:12:30.221 "params": { 00:12:30.221 "nqn": "nqn.2016-06.io.spdk:cnode10099", 00:12:30.221 "min_cntlid": 0 00:12:30.221 } 00:12:30.221 } 00:12:30.221 Got JSON-RPC error response 00:12:30.221 GoRPCClient: 
error on JSON-RPC call' 00:12:30.221 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/14 20:12:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode10099], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:30.221 request: 00:12:30.221 { 00:12:30.221 "method": "nvmf_create_subsystem", 00:12:30.221 "params": { 00:12:30.221 "nqn": "nqn.2016-06.io.spdk:cnode10099", 00:12:30.221 "min_cntlid": 0 00:12:30.221 } 00:12:30.221 } 00:12:30.221 Got JSON-RPC error response 00:12:30.221 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.221 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25056 -i 65520 00:12:30.479 [2024-07-14 20:12:19.555107] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25056: invalid cntlid range [65520-65519] 00:12:30.737 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/14 20:12:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode25056], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:30.737 request: 00:12:30.737 { 00:12:30.737 "method": "nvmf_create_subsystem", 00:12:30.737 "params": { 00:12:30.737 "nqn": "nqn.2016-06.io.spdk:cnode25056", 00:12:30.737 "min_cntlid": 65520 00:12:30.737 } 00:12:30.737 } 00:12:30.737 Got JSON-RPC error response 00:12:30.737 GoRPCClient: error on JSON-RPC call' 00:12:30.737 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/14 20:12:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode25056], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:30.737 request: 00:12:30.737 { 00:12:30.737 "method": "nvmf_create_subsystem", 00:12:30.737 "params": { 00:12:30.737 "nqn": "nqn.2016-06.io.spdk:cnode25056", 00:12:30.737 "min_cntlid": 65520 00:12:30.737 } 00:12:30.737 } 00:12:30.737 Got JSON-RPC error response 00:12:30.737 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.737 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15360 -I 0 00:12:30.995 [2024-07-14 20:12:19.823517] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15360: invalid cntlid range [1-0] 00:12:30.995 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/14 20:12:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode15360], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:30.995 request: 00:12:30.995 { 00:12:30.995 "method": "nvmf_create_subsystem", 00:12:30.995 "params": { 00:12:30.995 "nqn": "nqn.2016-06.io.spdk:cnode15360", 00:12:30.995 "max_cntlid": 0 00:12:30.995 } 00:12:30.995 } 00:12:30.995 Got JSON-RPC error response 00:12:30.995 GoRPCClient: error on JSON-RPC call' 00:12:30.995 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/14 20:12:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 
nqn:nqn.2016-06.io.spdk:cnode15360], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:30.995 request: 00:12:30.995 { 00:12:30.995 "method": "nvmf_create_subsystem", 00:12:30.995 "params": { 00:12:30.995 "nqn": "nqn.2016-06.io.spdk:cnode15360", 00:12:30.995 "max_cntlid": 0 00:12:30.995 } 00:12:30.995 } 00:12:30.995 Got JSON-RPC error response 00:12:30.995 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:30.995 20:12:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28513 -I 65520 00:12:31.253 [2024-07-14 20:12:20.111990] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28513: invalid cntlid range [1-65520] 00:12:31.253 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/14 20:12:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28513], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:31.253 request: 00:12:31.253 { 00:12:31.253 "method": "nvmf_create_subsystem", 00:12:31.253 "params": { 00:12:31.253 "nqn": "nqn.2016-06.io.spdk:cnode28513", 00:12:31.253 "max_cntlid": 65520 00:12:31.253 } 00:12:31.253 } 00:12:31.253 Got JSON-RPC error response 00:12:31.253 GoRPCClient: error on JSON-RPC call' 00:12:31.253 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/14 20:12:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28513], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:31.253 request: 00:12:31.253 { 00:12:31.253 "method": "nvmf_create_subsystem", 00:12:31.253 "params": { 00:12:31.253 "nqn": "nqn.2016-06.io.spdk:cnode28513", 00:12:31.253 "max_cntlid": 65520 00:12:31.253 } 00:12:31.253 } 00:12:31.253 Got JSON-RPC error response 00:12:31.253 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:31.253 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14800 -i 6 -I 5 00:12:31.512 [2024-07-14 20:12:20.396391] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14800: invalid cntlid range [6-5] 00:12:31.512 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/14 20:12:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode14800], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:31.512 request: 00:12:31.512 { 00:12:31.513 "method": "nvmf_create_subsystem", 00:12:31.513 "params": { 00:12:31.513 "nqn": "nqn.2016-06.io.spdk:cnode14800", 00:12:31.513 "min_cntlid": 6, 00:12:31.513 "max_cntlid": 5 00:12:31.513 } 00:12:31.513 } 00:12:31.513 Got JSON-RPC error response 00:12:31.513 GoRPCClient: error on JSON-RPC call' 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/14 20:12:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode14800], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:31.513 request: 
00:12:31.513 { 00:12:31.513 "method": "nvmf_create_subsystem", 00:12:31.513 "params": { 00:12:31.513 "nqn": "nqn.2016-06.io.spdk:cnode14800", 00:12:31.513 "min_cntlid": 6, 00:12:31.513 "max_cntlid": 5 00:12:31.513 } 00:12:31.513 } 00:12:31.513 Got JSON-RPC error response 00:12:31.513 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:31.513 { 00:12:31.513 "name": "foobar", 00:12:31.513 "method": "nvmf_delete_target", 00:12:31.513 "req_id": 1 00:12:31.513 } 00:12:31.513 Got JSON-RPC error response 00:12:31.513 response: 00:12:31.513 { 00:12:31.513 "code": -32602, 00:12:31.513 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:31.513 }' 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:31.513 { 00:12:31.513 "name": "foobar", 00:12:31.513 "method": "nvmf_delete_target", 00:12:31.513 "req_id": 1 00:12:31.513 } 00:12:31.513 Got JSON-RPC error response 00:12:31.513 response: 00:12:31.513 { 00:12:31.513 "code": -32602, 00:12:31.513 "message": "The specified target doesn't exist, cannot delete it." 00:12:31.513 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.513 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.513 rmmod nvme_tcp 00:12:31.772 rmmod nvme_fabrics 00:12:31.772 rmmod nvme_keyring 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 83585 ']' 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 83585 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 83585 ']' 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 83585 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83585 00:12:31.772 killing process with pid 83585 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 
-- # echo 'killing process with pid 83585' 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 83585 00:12:31.772 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 83585 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:32.031 00:12:32.031 real 0m6.131s 00:12:32.031 user 0m24.499s 00:12:32.031 sys 0m1.448s 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:32.031 ************************************ 00:12:32.031 END TEST nvmf_invalid 00:12:32.031 20:12:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:32.031 ************************************ 00:12:32.032 20:12:20 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:32.032 20:12:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:32.032 20:12:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:32.032 20:12:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.032 ************************************ 00:12:32.032 START TEST nvmf_abort 00:12:32.032 ************************************ 00:12:32.032 20:12:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:32.032 * Looking for test storage... 
00:12:32.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:32.032 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:32.292 Cannot find device "nvmf_tgt_br" 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.292 Cannot find device "nvmf_tgt_br2" 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:32.292 Cannot find device "nvmf_tgt_br" 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:32.292 Cannot find device "nvmf_tgt_br2" 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.292 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:32.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:12:32.551 00:12:32.551 --- 10.0.0.2 ping statistics --- 00:12:32.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.551 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:32.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:32.551 00:12:32.551 --- 10.0.0.3 ping statistics --- 00:12:32.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.551 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:32.551 00:12:32.551 --- 10.0.0.1 ping statistics --- 00:12:32.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.551 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=84093 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 84093 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 84093 ']' 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:32.551 20:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:32.551 [2024-07-14 20:12:21.531495] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:32.551 [2024-07-14 20:12:21.531576] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.810 [2024-07-14 20:12:21.665290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.810 [2024-07-14 20:12:21.788752] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.810 [2024-07-14 20:12:21.789085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
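By this point the abort test has built its veth/bridge topology and launches the target inside the nvmf_tgt_ns_spdk namespace, then waits on the /var/tmp/spdk.sock RPC socket before issuing any rpc.py calls. A rough equivalent of that start-up step, assuming the namespace from the setup traced above already exists (the polling loop is an illustration, not the waitforlisten helper itself):
# Start the target in the test namespace and wait for its JSON-RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do   # socket named in the "Waiting for process..." message
    sleep 0.5
done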
00:12:32.810 [2024-07-14 20:12:21.789265] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.810 [2024-07-14 20:12:21.789317] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.810 [2024-07-14 20:12:21.789415] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.810 [2024-07-14 20:12:21.789625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.810 [2024-07-14 20:12:21.790167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.810 [2024-07-14 20:12:21.790175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.746 [2024-07-14 20:12:22.629423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.746 Malloc0 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.746 Delay0 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.746 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.747 20:12:22 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.747 [2024-07-14 20:12:22.705041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.747 20:12:22 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:34.005 [2024-07-14 20:12:22.885118] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:35.904 Initializing NVMe Controllers 00:12:35.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:35.904 controller IO queue size 128 less than required 00:12:35.904 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:35.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:35.904 Initialization complete. Launching workers. 
00:12:35.904 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33732 00:12:35.904 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33793, failed to submit 62 00:12:35.904 success 33736, unsuccess 57, failed 0 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.904 20:12:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.904 rmmod nvme_tcp 00:12:35.904 rmmod nvme_fabrics 00:12:36.163 rmmod nvme_keyring 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 84093 ']' 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 84093 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 84093 ']' 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 84093 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84093 00:12:36.163 killing process with pid 84093 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84093' 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 84093 00:12:36.163 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 84093 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:36.422 00:12:36.422 real 0m4.407s 00:12:36.422 user 0m12.552s 00:12:36.422 sys 0m1.149s 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:36.422 ************************************ 00:12:36.422 END TEST nvmf_abort 00:12:36.422 ************************************ 00:12:36.422 20:12:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:36.422 20:12:25 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:36.422 20:12:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:36.422 20:12:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:36.422 20:12:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.422 ************************************ 00:12:36.422 START TEST nvmf_ns_hotplug_stress 00:12:36.422 ************************************ 00:12:36.422 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:36.681 * Looking for test storage... 00:12:36.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.681 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.682 20:12:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:36.682 Cannot find device "nvmf_tgt_br" 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.682 Cannot find device "nvmf_tgt_br2" 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:36.682 Cannot find device "nvmf_tgt_br" 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:36.682 Cannot find device "nvmf_tgt_br2" 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:36.682 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link 
set nvmf_tgt_br up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:36.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:36.942 00:12:36.942 --- 10.0.0.2 ping statistics --- 00:12:36.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.942 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:36.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:36.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:12:36.942 00:12:36.942 --- 10.0.0.3 ping statistics --- 00:12:36.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.942 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:36.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:36.942 00:12:36.942 --- 10.0.0.1 ping statistics --- 00:12:36.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.942 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=84362 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 84362 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 84362 ']' 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:36.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:36.942 20:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 [2024-07-14 20:12:25.956664] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:36.942 [2024-07-14 20:12:25.956760] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.202 [2024-07-14 20:12:26.091709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.202 [2024-07-14 20:12:26.205672] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:37.202 [2024-07-14 20:12:26.205755] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.202 [2024-07-14 20:12:26.205766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.202 [2024-07-14 20:12:26.205774] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.202 [2024-07-14 20:12:26.205780] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.202 [2024-07-14 20:12:26.205934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.202 [2024-07-14 20:12:26.207053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.202 [2024-07-14 20:12:26.207064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:38.139 20:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:38.139 [2024-07-14 20:12:27.140109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.139 20:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.398 20:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.657 [2024-07-14 20:12:27.582271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.657 20:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.916 20:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:39.175 Malloc0 00:12:39.175 20:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:39.434 Delay0 00:12:39.434 20:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.693 20:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:39.951 NULL1 00:12:39.951 
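For orientation, the provisioning traced above reduces to a short rpc.py sequence run against the nvmf_tgt started inside the nvmf_tgt_ns_spdk namespace: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 with data and discovery listeners on 10.0.0.2:4420, build the Malloc0 -> Delay0 bdev chain, attach Delay0 as a namespace, and create the NULL1 bdev that the hotplug loop will keep resizing. A condensed sketch of those calls follows; the rpc/nqn shorthands and the comments are added here for readability and are not part of the test script.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport init
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  $rpc bdev_malloc_create 32 512 -b Malloc0                     # backing bdev
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0                      # becomes namespace 1
  $rpc bdev_null_create NULL1 1000 512                          # resized during the stress loop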
20:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:40.210 20:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=84493 00:12:40.210 20:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:40.210 20:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:40.210 20:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.602 Read completed with error (sct=0, sc=11) 00:12:41.602 20:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.602 20:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:41.602 20:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:41.866 true 00:12:41.866 20:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:41.866 20:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.801 20:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.801 20:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:42.801 20:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:43.060 true 00:12:43.060 20:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:43.060 20:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.319 20:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.578 20:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:43.578 20:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:43.837 true 00:12:43.837 20:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 
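The @44-@50 lines that repeat from here on are the stress loop itself: spdk_nvme_perf (PID 84493) runs randread traffic in the background for 30 seconds while namespace 1 is hot-removed and re-added and NULL1 is grown by one unit per pass, which is consistent with the recurring "Read completed with error (sct=0, sc=11)" messages on the initiator side. A minimal reconstruction of that loop from the traced script lines follows; the exact control flow in ns_hotplug_stress.sh may differ, and the rpc/nqn shorthands are added for brevity.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # 30 s of single-core randread at queue depth 128 (flags copied from the trace)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do                   # keep going until perf exits
      $rpc nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove namespace 1
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add it back
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"    # grow NULL1 while I/O is in flight
  done
  wait "$PERF_PID"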
00:12:43.837 20:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.771 20:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.771 20:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:44.771 20:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:45.029 true 00:12:45.286 20:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:45.286 20:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.286 20:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.544 20:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:45.544 20:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:45.801 true 00:12:45.801 20:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:45.801 20:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.737 20:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.737 20:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:46.737 20:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:47.305 true 00:12:47.305 20:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:47.305 20:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.305 20:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.564 20:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:47.564 20:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:47.823 true 00:12:47.823 20:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:47.823 20:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.082 20:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.340 20:12:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:48.340 20:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:48.598 true 00:12:48.598 20:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:48.598 20:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.534 20:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.051 20:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:50.052 20:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:50.310 true 00:12:50.310 20:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:50.310 20:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.878 20:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.446 20:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:51.446 20:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:51.446 true 00:12:51.446 20:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:51.446 20:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.704 20:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.963 20:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:51.963 20:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:52.221 true 00:12:52.221 20:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:52.221 20:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.154 20:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.154 20:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:53.154 20:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:53.412 true 00:12:53.412 20:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:53.412 20:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.669 20:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.926 20:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:53.926 20:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:54.184 true 00:12:54.184 20:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:54.184 20:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.115 20:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.115 20:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:55.115 20:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:55.372 true 00:12:55.372 20:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:55.372 20:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.630 20:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.888 20:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:55.888 20:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:56.146 true 00:12:56.146 20:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:56.146 20:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.079 20:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.337 20:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:57.337 20:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:57.596 true 00:12:57.596 20:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:57.596 20:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.855 20:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.855 20:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:57.855 20:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:58.113 true 00:12:58.113 20:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:58.113 20:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.066 20:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.325 20:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:59.325 20:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:59.325 true 00:12:59.594 20:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:12:59.594 20:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.594 20:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.866 20:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:59.866 20:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:00.125 true 00:13:00.125 20:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:00.125 20:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.059 20:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.318 20:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:01.318 20:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:01.318 true 00:13:01.318 20:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:01.318 20:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.575 20:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.833 20:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:01.833 20:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:02.091 true 00:13:02.091 20:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:02.091 20:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.026 20:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.283 20:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:03.283 20:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:03.542 true 00:13:03.542 20:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:03.542 20:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.542 20:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.800 20:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:03.800 20:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:04.059 true 00:13:04.059 20:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:04.059 20:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.994 20:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.252 20:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:05.252 20:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:05.510 true 00:13:05.510 20:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:05.510 20:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.768 20:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.026 20:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:06.026 20:12:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:06.283 true 00:13:06.283 20:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:06.283 20:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.219 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.219 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:07.219 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:07.478 true 00:13:07.478 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:07.478 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.737 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.996 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:07.996 20:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:08.255 true 00:13:08.255 20:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:08.255 20:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.191 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.450 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:09.450 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:09.450 true 00:13:09.450 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:09.450 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.709 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.968 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:09.968 20:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:10.227 true 00:13:10.227 20:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:10.227 20:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.164 Initializing NVMe Controllers 00:13:11.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:11.164 Controller IO queue size 128, less than required. 00:13:11.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:11.164 Controller IO queue size 128, less than required. 00:13:11.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:11.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:11.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:11.164 Initialization complete. Launching workers. 00:13:11.164 ======================================================== 00:13:11.164 Latency(us) 00:13:11.164 Device Information : IOPS MiB/s Average min max 00:13:11.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 585.96 0.29 126673.90 3082.78 1158162.79 00:13:11.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12852.73 6.28 9959.26 3393.71 519611.95 00:13:11.164 ======================================================== 00:13:11.164 Total : 13438.69 6.56 15048.31 3082.78 1158162.79 00:13:11.164 00:13:11.164 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.164 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:11.165 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:11.423 true 00:13:11.423 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84493 00:13:11.423 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (84493) - No such process 00:13:11.423 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 84493 00:13:11.423 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.682 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.941 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:11.941 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:11.941 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:11.941 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.941 20:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:12.200 null0 00:13:12.200 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.200 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.200 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create 
null1 100 4096 00:13:12.200 null1 00:13:12.459 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.459 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.459 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:12.718 null2 00:13:12.718 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.718 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.718 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:12.718 null3 00:13:12.718 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.718 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.718 20:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:12.976 null4 00:13:12.976 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.976 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.976 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:13.235 null5 00:13:13.235 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.235 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.235 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:13.493 null6 00:13:13.493 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.493 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.494 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:13.753 null7 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
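Before the parallel phase continues below, note that the spdk_nvme_perf summary printed a little earlier is internally consistent: the Total row is the IOPS-weighted combination of the two namespaces, 585.96 + 12852.73 = 13438.69 IOPS, and (585.96 * 126673.90 + 12852.73 * 9959.26) / 13438.69 is approximately 15048 us average latency, matching the reported 15048.31 us, while the Total min (3082.78 us) and max (1158162.79 us) are simply the extremes of the two per-namespace rows. The large latency gap between the rows is consistent with NSID 1 being backed by the artificial-latency Delay0 bdev and repeatedly detached during the run, while NSID 2 is the plain NULL1 bdev.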
00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
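The interleaved @14-@18 and @62-@66 lines above and below are eight add_remove workers running in parallel, one per freshly created null bdev, each attaching and detaching its own NSID ten times. Untangled, the phase amounts to the sketch below, reconstructed from the traced script lines; the helper and loop bodies are paraphrased rather than copied from ns_hotplug_stress.sh, and the rpc/nqn shorthands are added for brevity.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {                                  # repeatedly attach/detach one bdev
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096     # null0 .. null7
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &            # NSIDs 1..8, one worker each
      pids+=($!)
  done
  wait "${pids[@]}"                               # e.g. the "wait 85545 85547 ..." further down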
00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 85545 85547 85549 85550 85552 85554 85557 85558 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.753 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.013 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.013 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.013 20:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.013 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.013 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.013 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.276 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.534 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:14.792 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.793 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.050 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.050 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.050 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.050 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.050 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:15.050 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.050 20:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.050 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.050 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.050 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.050 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.050 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:13:15.308 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:15.566 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:15.823 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.082 20:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.082 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:16.340 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:16.598 20:13:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:16.598 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:16.857 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.116 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.116 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.116 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.116 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.116 
20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.116 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.116 20:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.116 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.374 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:17.633 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:17.892 20:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.151 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.409 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.668 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:18.928 20:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.187 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.446 rmmod nvme_tcp 00:13:19.446 rmmod nvme_fabrics 00:13:19.446 rmmod nvme_keyring 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 84362 ']' 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 84362 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 84362 ']' 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 84362 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84362 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:19.446 
killing process with pid 84362 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84362' 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 84362 00:13:19.446 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 84362 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:19.705 ************************************ 00:13:19.705 END TEST nvmf_ns_hotplug_stress 00:13:19.705 ************************************ 00:13:19.705 00:13:19.705 real 0m43.253s 00:13:19.705 user 3m25.023s 00:13:19.705 sys 0m13.654s 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:19.705 20:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.705 20:13:08 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.705 20:13:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:19.705 20:13:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:19.705 20:13:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.705 ************************************ 00:13:19.705 START TEST nvmf_connect_stress 00:13:19.705 ************************************ 00:13:19.705 20:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.964 * Looking for test storage... 
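Before moving on: the namespace churn traced in the nvmf_ns_hotplug_stress section above boils down to the loop sketched below. This is a reconstruction from the rpc.py calls visible in the trace, not the literal target/ns_hotplug_stress.sh source: the test runs one such loop per namespace in the background (hence the interleaved add_ns/remove_ns ordering in the log), and the surrounding setup is assumed to already be in place.

    # Sketch of the hotplug stress loop, reconstructed from the trace above.
    # Assumptions: the target already exposes nqn.2016-06.io.spdk:cnode1 with null
    # bdevs null0..null7 created, and rpc.py talks to the default RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    stress_ns() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }
    for n in $(seq 1 8); do
        stress_ns "$n" "null$((n - 1))" &   # eight loops in parallel, which is why the calls interleave
    done
    wait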
00:13:19.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.964 20:13:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:19.965 Cannot find device "nvmf_tgt_br" 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:19.965 Cannot find device "nvmf_tgt_br2" 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:19.965 Cannot find device "nvmf_tgt_br" 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:19.965 Cannot find device "nvmf_tgt_br2" 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:19.965 20:13:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:13:19.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:19.965 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:20.224 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:20.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:20.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:13:20.225 00:13:20.225 --- 10.0.0.2 ping statistics --- 00:13:20.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.225 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:20.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:13:20.225 00:13:20.225 --- 10.0.0.3 ping statistics --- 00:13:20.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.225 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:13:20.225 00:13:20.225 --- 10.0.0.1 ping statistics --- 00:13:20.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.225 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=86868 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 86868 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 86868 ']' 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:20.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
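Condensed, the nvmf_veth_init bring-up traced above reduces to the commands below. Only commands that actually appear in the trace are listed; the preliminary cleanup of stale interfaces and all error handling are omitted, and the interface and namespace names are taken verbatim from the log.

    # Test network for the tcp transport tests: the initiator stays in the default
    # namespace (10.0.0.1); the target runs inside netns nvmf_tgt_ns_spdk (10.0.0.2, 10.0.0.3).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # target address, from the initiator side
    ping -c 1 10.0.0.3                                   # second target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # initiator address, from inside the netns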
00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:20.225 20:13:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.225 [2024-07-14 20:13:09.294686] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:20.225 [2024-07-14 20:13:09.294793] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.485 [2024-07-14 20:13:09.428072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.485 [2024-07-14 20:13:09.523572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.485 [2024-07-14 20:13:09.523647] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.485 [2024-07-14 20:13:09.523657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.485 [2024-07-14 20:13:09.523664] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.485 [2024-07-14 20:13:09.523670] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.485 [2024-07-14 20:13:09.523840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.485 [2024-07-14 20:13:09.524650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.485 [2024-07-14 20:13:09.524700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.448 [2024-07-14 20:13:10.278780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.448 
20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.448 [2024-07-14 20:13:10.296973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.448 NULL1 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=86920 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.448 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.707 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.707 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:21.707 20:13:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.707 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.707 20:13:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.965 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.965 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:21.965 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.965 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.965 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.530 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
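For reference, the target setup that connect_stress.sh drives through rpc_cmd above reduces to the following RPC sequence. This is a hedged sketch only: the arguments are taken verbatim from the xtrace output, while the rpc.py path and invocation style are assumed (the test's rpc_cmd wrapper talks to the already-running nvmf_tgt).

    # configure the TCP target that connect_stress will hammer (arguments as traced above)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    # run the stress client for 10 seconds while the loop above keeps the target busy
    # with batched RPCs from rpc.txt and polls the client with 'kill -0 $PERF_PID'
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10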
00:13:22.530 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:22.530 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.530 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.530 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.788 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.788 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:22.788 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.788 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.788 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.047 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.047 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:23.047 20:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.047 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.047 20:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.305 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.305 20:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:23.305 20:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.305 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.305 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.564 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.564 20:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:23.564 20:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.564 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.564 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.131 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.131 20:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:24.131 20:13:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.131 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.131 20:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.388 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.388 20:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:24.388 20:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.388 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.388 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.644 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.644 20:13:13 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 86920 00:13:24.644 20:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.644 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.644 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.902 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.902 20:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:24.902 20:13:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.902 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.902 20:13:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.160 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.160 20:13:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:25.160 20:13:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.160 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.160 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.728 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.728 20:13:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:25.728 20:13:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.728 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.728 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.987 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.987 20:13:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:25.987 20:13:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.987 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.987 20:13:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.246 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.246 20:13:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:26.246 20:13:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.246 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.246 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.504 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.504 20:13:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:26.504 20:13:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.504 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.504 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.071 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.071 20:13:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:27.071 20:13:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.071 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.071 20:13:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.330 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.330 20:13:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:27.330 20:13:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.330 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.330 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.588 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.588 20:13:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:27.588 20:13:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.588 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.588 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.847 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.847 20:13:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:27.847 20:13:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.847 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.847 20:13:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.105 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.105 20:13:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:28.105 20:13:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.105 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.105 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.672 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.673 20:13:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:28.673 20:13:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.673 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.673 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.930 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.930 20:13:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:28.930 20:13:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.930 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.930 20:13:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.188 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.188 20:13:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:29.188 20:13:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:29.188 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.188 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.445 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.445 20:13:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:29.445 20:13:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.445 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.445 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.702 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.702 20:13:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:29.702 20:13:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.702 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.702 20:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.267 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.267 20:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:30.267 20:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.267 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.267 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.557 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.557 20:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:30.557 20:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.557 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.557 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.815 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.815 20:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:30.815 20:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.815 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.815 20:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.073 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.073 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:31.073 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.073 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.073 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.330 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.330 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:31.330 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.330 20:13:20 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.330 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.588 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86920 00:13:31.847 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (86920) - No such process 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 86920 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:31.847 rmmod nvme_tcp 00:13:31.847 rmmod nvme_fabrics 00:13:31.847 rmmod nvme_keyring 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 86868 ']' 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 86868 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 86868 ']' 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 86868 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86868 00:13:31.847 killing process with pid 86868 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86868' 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 86868 00:13:31.847 20:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 86868 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
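The nvmftestfini path traced above is the usual cleanup: unload the kernel NVMe-over-TCP initiator modules, kill the target process, and tear down the test networking. A rough sketch, with the pid taken from this run (86868 was the nvmf_tgt started for this test):

    modprobe -v -r nvme-tcp        # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill 86868 && wait 86868       # killprocess/wait on the target started by nvmfappstart
    ip -4 addr flush nvmf_init_if  # drop the initiator-side test address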
00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:32.105 00:13:32.105 real 0m12.409s 00:13:32.105 user 0m41.000s 00:13:32.105 sys 0m3.501s 00:13:32.105 20:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.105 ************************************ 00:13:32.105 END TEST nvmf_connect_stress 00:13:32.106 ************************************ 00:13:32.106 20:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.366 20:13:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.366 20:13:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:32.366 20:13:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.366 20:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.366 ************************************ 00:13:32.366 START TEST nvmf_fused_ordering 00:13:32.366 ************************************ 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:32.366 * Looking for test storage... 
00:13:32.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.366 20:13:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:32.367 Cannot find device "nvmf_tgt_br" 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:32.367 Cannot find device "nvmf_tgt_br2" 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:32.367 Cannot find device "nvmf_tgt_br" 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:32.367 Cannot find device "nvmf_tgt_br2" 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:32.367 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:13:32.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:32.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:32.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:32.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:32.627 00:13:32.627 --- 10.0.0.2 ping statistics --- 00:13:32.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.627 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:32.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:32.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:32.627 00:13:32.627 --- 10.0.0.3 ping statistics --- 00:13:32.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.627 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:32.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:32.627 00:13:32.627 --- 10.0.0.1 ping statistics --- 00:13:32.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.627 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:32.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=87249 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 87249 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 87249 ']' 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
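The nvmf_veth_init block above is what builds the virtual test network that the three pings then verify: a network namespace for the target, veth pairs bridged back to the initiator, and an iptables rule admitting port 4420. A condensed sketch of the same commands (all taken from the trace; the second target interface, nvmf_tgt_if2 / 10.0.0.3, is created the same way and is omitted here):

    NS=nvmf_tgt_ns_spdk
    ip netns add $NS
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns $NS
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator side
    ip netns exec $NS ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec $NS ip link set nvmf_tgt_if up
    ip netns exec $NS ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                    # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1  # target -> initiator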
00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:32.627 20:13:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:32.886 [2024-07-14 20:13:21.728390] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:32.886 [2024-07-14 20:13:21.728500] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.886 [2024-07-14 20:13:21.870192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.145 [2024-07-14 20:13:21.978496] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.145 [2024-07-14 20:13:21.978562] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.145 [2024-07-14 20:13:21.978574] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.145 [2024-07-14 20:13:21.978582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.145 [2024-07-14 20:13:21.978590] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.145 [2024-07-14 20:13:21.978627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.713 [2024-07-14 20:13:22.783927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.713 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.972 [2024-07-14 
20:13:22.800094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.972 NULL1 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.972 20:13:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:33.972 [2024-07-14 20:13:22.849789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
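As with connect_stress, the fused_ordering setup above creates the TCP transport, the cnode1 subsystem and the 10.0.0.2:4420 listener, then adds a null bdev as namespace 1 before launching the example app. A hedged sketch of the namespace/app portion (arguments copied from the trace; rpc.py path assumed):

    scripts/rpc.py bdev_null_create NULL1 1000 512            # the attach log below reports this namespace as 1GB
    scripts/rpc.py bdev_wait_for_examine                      # let the bdev layer finish examining before exposing it
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'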
00:13:33.972 [2024-07-14 20:13:22.849822] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87299 ] 00:13:34.234 Attached to nqn.2016-06.io.spdk:cnode1 00:13:34.234 Namespace ID: 1 size: 1GB 00:13:34.234 fused_ordering(0) 00:13:34.234 fused_ordering(1) 00:13:34.234 fused_ordering(2) 00:13:34.234 fused_ordering(3) 00:13:34.234 fused_ordering(4) 00:13:34.234 fused_ordering(5) 00:13:34.234 fused_ordering(6) 00:13:34.234 fused_ordering(7) 00:13:34.234 fused_ordering(8) 00:13:34.234 fused_ordering(9) 00:13:34.234 fused_ordering(10) 00:13:34.234 fused_ordering(11) 00:13:34.234 fused_ordering(12) 00:13:34.234 fused_ordering(13) 00:13:34.234 fused_ordering(14) 00:13:34.234 fused_ordering(15) 00:13:34.234 fused_ordering(16) 00:13:34.234 fused_ordering(17) 00:13:34.234 fused_ordering(18) 00:13:34.234 fused_ordering(19) 00:13:34.234 fused_ordering(20) 00:13:34.234 fused_ordering(21) 00:13:34.234 fused_ordering(22) 00:13:34.234 fused_ordering(23) 00:13:34.234 fused_ordering(24) 00:13:34.234 fused_ordering(25) 00:13:34.234 fused_ordering(26) 00:13:34.234 fused_ordering(27) 00:13:34.234 fused_ordering(28) 00:13:34.234 fused_ordering(29) 00:13:34.234 fused_ordering(30) 00:13:34.234 fused_ordering(31) 00:13:34.234 fused_ordering(32) 00:13:34.234 fused_ordering(33) 00:13:34.234 fused_ordering(34) 00:13:34.234 fused_ordering(35) 00:13:34.234 fused_ordering(36) 00:13:34.234 fused_ordering(37) 00:13:34.234 fused_ordering(38) 00:13:34.234 fused_ordering(39) 00:13:34.234 fused_ordering(40) 00:13:34.234 fused_ordering(41) 00:13:34.234 fused_ordering(42) 00:13:34.234 fused_ordering(43) 00:13:34.234 fused_ordering(44) 00:13:34.234 fused_ordering(45) 00:13:34.234 fused_ordering(46) 00:13:34.234 fused_ordering(47) 00:13:34.234 fused_ordering(48) 00:13:34.234 fused_ordering(49) 00:13:34.234 fused_ordering(50) 00:13:34.234 fused_ordering(51) 00:13:34.234 fused_ordering(52) 00:13:34.234 fused_ordering(53) 00:13:34.234 fused_ordering(54) 00:13:34.234 fused_ordering(55) 00:13:34.234 fused_ordering(56) 00:13:34.234 fused_ordering(57) 00:13:34.234 fused_ordering(58) 00:13:34.234 fused_ordering(59) 00:13:34.234 fused_ordering(60) 00:13:34.234 fused_ordering(61) 00:13:34.234 fused_ordering(62) 00:13:34.234 fused_ordering(63) 00:13:34.234 fused_ordering(64) 00:13:34.234 fused_ordering(65) 00:13:34.234 fused_ordering(66) 00:13:34.234 fused_ordering(67) 00:13:34.234 fused_ordering(68) 00:13:34.234 fused_ordering(69) 00:13:34.234 fused_ordering(70) 00:13:34.234 fused_ordering(71) 00:13:34.234 fused_ordering(72) 00:13:34.235 fused_ordering(73) 00:13:34.235 fused_ordering(74) 00:13:34.235 fused_ordering(75) 00:13:34.235 fused_ordering(76) 00:13:34.235 fused_ordering(77) 00:13:34.235 fused_ordering(78) 00:13:34.235 fused_ordering(79) 00:13:34.235 fused_ordering(80) 00:13:34.235 fused_ordering(81) 00:13:34.235 fused_ordering(82) 00:13:34.235 fused_ordering(83) 00:13:34.235 fused_ordering(84) 00:13:34.235 fused_ordering(85) 00:13:34.235 fused_ordering(86) 00:13:34.235 fused_ordering(87) 00:13:34.235 fused_ordering(88) 00:13:34.235 fused_ordering(89) 00:13:34.235 fused_ordering(90) 00:13:34.235 fused_ordering(91) 00:13:34.235 fused_ordering(92) 00:13:34.235 fused_ordering(93) 00:13:34.235 fused_ordering(94) 00:13:34.235 fused_ordering(95) 00:13:34.235 fused_ordering(96) 00:13:34.235 fused_ordering(97) 00:13:34.235 fused_ordering(98) 
00:13:34.235 fused_ordering(99) through 00:13:35.890 fused_ordering(958) [860 consecutive fused_ordering(nnn) entries, identical apart from the incrementing index and timestamps advancing from 00:13:34.235 to 00:13:35.890, condensed]
00:13:35.890 fused_ordering(959) 00:13:35.890 fused_ordering(960) 00:13:35.890 fused_ordering(961) 00:13:35.890 fused_ordering(962) 00:13:35.890 fused_ordering(963) 00:13:35.890 fused_ordering(964) 00:13:35.890 fused_ordering(965) 00:13:35.890 fused_ordering(966) 00:13:35.890 fused_ordering(967) 00:13:35.890 fused_ordering(968) 00:13:35.890 fused_ordering(969) 00:13:35.890 fused_ordering(970) 00:13:35.890 fused_ordering(971) 00:13:35.890 fused_ordering(972) 00:13:35.890 fused_ordering(973) 00:13:35.890 fused_ordering(974) 00:13:35.890 fused_ordering(975) 00:13:35.890 fused_ordering(976) 00:13:35.890 fused_ordering(977) 00:13:35.890 fused_ordering(978) 00:13:35.890 fused_ordering(979) 00:13:35.890 fused_ordering(980) 00:13:35.890 fused_ordering(981) 00:13:35.890 fused_ordering(982) 00:13:35.890 fused_ordering(983) 00:13:35.890 fused_ordering(984) 00:13:35.890 fused_ordering(985) 00:13:35.890 fused_ordering(986) 00:13:35.890 fused_ordering(987) 00:13:35.890 fused_ordering(988) 00:13:35.890 fused_ordering(989) 00:13:35.890 fused_ordering(990) 00:13:35.890 fused_ordering(991) 00:13:35.890 fused_ordering(992) 00:13:35.890 fused_ordering(993) 00:13:35.890 fused_ordering(994) 00:13:35.890 fused_ordering(995) 00:13:35.890 fused_ordering(996) 00:13:35.890 fused_ordering(997) 00:13:35.890 fused_ordering(998) 00:13:35.890 fused_ordering(999) 00:13:35.890 fused_ordering(1000) 00:13:35.890 fused_ordering(1001) 00:13:35.890 fused_ordering(1002) 00:13:35.890 fused_ordering(1003) 00:13:35.890 fused_ordering(1004) 00:13:35.890 fused_ordering(1005) 00:13:35.890 fused_ordering(1006) 00:13:35.890 fused_ordering(1007) 00:13:35.890 fused_ordering(1008) 00:13:35.890 fused_ordering(1009) 00:13:35.890 fused_ordering(1010) 00:13:35.890 fused_ordering(1011) 00:13:35.890 fused_ordering(1012) 00:13:35.890 fused_ordering(1013) 00:13:35.890 fused_ordering(1014) 00:13:35.890 fused_ordering(1015) 00:13:35.890 fused_ordering(1016) 00:13:35.891 fused_ordering(1017) 00:13:35.891 fused_ordering(1018) 00:13:35.891 fused_ordering(1019) 00:13:35.891 fused_ordering(1020) 00:13:35.891 fused_ordering(1021) 00:13:35.891 fused_ordering(1022) 00:13:35.891 fused_ordering(1023) 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.891 rmmod nvme_tcp 00:13:35.891 rmmod nvme_fabrics 00:13:35.891 rmmod nvme_keyring 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 87249 ']' 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 87249 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@946 -- # '[' -z 87249 ']' 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 87249 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87249 00:13:35.891 killing process with pid 87249 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87249' 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 87249 00:13:35.891 20:13:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 87249 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:36.149 00:13:36.149 real 0m4.000s 00:13:36.149 user 0m4.578s 00:13:36.149 sys 0m1.467s 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.149 20:13:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:36.149 ************************************ 00:13:36.149 END TEST nvmf_fused_ordering 00:13:36.149 ************************************ 00:13:36.409 20:13:25 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:36.409 20:13:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:36.409 20:13:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.409 20:13:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.409 ************************************ 00:13:36.409 START TEST nvmf_delete_subsystem 00:13:36.409 ************************************ 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:36.409 * Looking for test storage... 
00:13:36.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.409 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:36.410 Cannot find device "nvmf_tgt_br" 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:36.410 Cannot find device "nvmf_tgt_br2" 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:36.410 Cannot find device "nvmf_tgt_br" 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:36.410 Cannot find device "nvmf_tgt_br2" 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:13:36.410 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:36.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:36.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.670 20:13:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:36.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:36.670 00:13:36.670 --- 10.0.0.2 ping statistics --- 00:13:36.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.670 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:36.670 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:36.670 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:36.670 00:13:36.670 --- 10.0.0.3 ping statistics --- 00:13:36.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.670 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:36.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:36.670 00:13:36.670 --- 10.0.0.1 ping statistics --- 00:13:36.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.670 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:36.670 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=87503 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 87503 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 87503 ']' 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:36.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:36.930 20:13:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:36.930 [2024-07-14 20:13:25.827608] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:36.930 [2024-07-14 20:13:25.827707] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.930 [2024-07-14 20:13:25.967555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.188 [2024-07-14 20:13:26.072237] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.188 [2024-07-14 20:13:26.072298] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.188 [2024-07-14 20:13:26.072312] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.188 [2024-07-14 20:13:26.072323] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.188 [2024-07-14 20:13:26.072333] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.188 [2024-07-14 20:13:26.073003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.188 [2024-07-14 20:13:26.073021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.756 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:37.756 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:13:37.756 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.756 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.756 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.015 [2024-07-14 20:13:26.880304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.015 [2024-07-14 20:13:26.897202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.015 NULL1 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.015 Delay0 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=87554 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:38.015 20:13:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:38.015 [2024-07-14 20:13:27.091034] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
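For reference, the subsystem and bdev stack that delete_subsystem.sh builds in the trace above can be replayed by hand against the already-running nvmf_tgt. The sketch below is a reconstruction, not part of the test output: it assumes rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, and it simply reuses the arguments captured in the trace (the tcp transport, nqn.2016-06.io.spdk:cnode1, the NULL1/Delay0 bdevs, and the spdk_nvme_perf invocation).
    # Minimal manual replay of the setup traced above (assumes scripts/rpc.py
    # and the default /var/tmp/spdk.sock RPC socket; all paths and arguments
    # are taken from the trace, not from the test framework itself).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Background I/O load, mirroring target/delete_subsystem.sh@26 and @28:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while perf I/O is still in flight
The large Delay0 latency arguments keep commands outstanding at full queue depth, so the delete at target/delete_subsystem.sh@32 lands while I/O is still in flight, which appears to be what the error completions in the trace below are exercising.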
00:13:39.917 20:13:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.917 20:13:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.917 20:13:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Write completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 starting I/O failed: -6 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 [2024-07-14 20:13:29.126723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c8e0 is same with the state(5) to be set 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed with error (sct=0, sc=8) 00:13:40.175 Read completed 
with error (sct=0, sc=8)
00:13:40.175 [many more Read/Write completed with error (sct=0, sc=8) entries, interleaved with repeated "starting I/O failed: -6"]
00:13:40.176 [2024-07-14 20:13:29.128182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f438c00c2f0 is same with the state(5) to be set
00:13:40.176 [further Read/Write completed with error (sct=0, sc=8) entries]
00:13:41.123 [2024-07-14 20:13:30.105575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175a6c0 is same with the state(5) to be set
00:13:41.123 [further Read/Write completed with error (sct=0, sc=8) entries]
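
The (sct=0, sc=8) completions above are the point of this test case: status code type 0 / status code 8 is the NVMe generic status "Command Aborted due to SQ Deletion", which is what in-flight I/O is completed with once delete_subsystem tears the subsystem down underneath spdk_nvme_perf, and "starting I/O failed: -6" looks like -ENXIO from the submission path once the qpair is gone. When triaging a saved copy of a log like this one, a rough tally is usually enough; build.log below is a placeholder name, not a file this job produces:

  # Count aborted completions and failed submissions in a saved copy of the log.
  grep -o 'completed with error (sct=0, sc=8)' build.log | wc -l
  grep -c 'starting I/O failed: -6' build.log
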
00:13:41.123 [2024-07-14 20:13:30.127036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f438c00bfe0 is same with the state(5) to be set
00:13:41.123 [further Read/Write completed with error (sct=0, sc=8) entries]
00:13:41.124 [2024-07-14 20:13:30.127248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f438c00c600 is same with the state(5) to be set
00:13:41.124 Initializing NVMe Controllers
00:13:41.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:41.124 Controller IO queue size 128, less than required.
00:13:41.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:41.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:41.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:41.124 Initialization complete. Launching workers.
00:13:41.124 ========================================================
00:13:41.124                                                                   Latency(us)
00:13:41.124 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:13:41.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     172.79       0.08  890391.46     432.23 1012175.55
00:13:41.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     172.29       0.08  890186.28     307.05 1012755.39
00:13:41.124 ========================================================
00:13:41.124 Total                                                                    :     345.08       0.17  890289.02     307.05 1012755.39
00:13:41.124
00:13:41.124 [further Read/Write completed with error (sct=0, sc=8) entries]
00:13:41.124 [2024-07-14 20:13:30.128255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c700 is same with the state(5) to be set
00:13:41.124 [further Read/Write completed with error (sct=0, sc=8) entries]
00:13:41.124 [2024-07-14 20:13:30.128460] nvme_tcp.c:
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173cac0 is same with the state(5) to be set 00:13:41.124 [2024-07-14 20:13:30.129670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175a6c0 (9): Bad file descriptor 00:13:41.124 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:41.124 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.124 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:41.124 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87554 00:13:41.124 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87554 00:13:41.691 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (87554) - No such process 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 87554 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 87554 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 87554 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.691 [2024-07-14 20:13:30.664847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=87606 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:41.691 20:13:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.963 [2024-07-14 20:13:30.844518] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:42.247 20:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.247 20:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:42.247 20:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.817 20:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.817 20:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:42.817 20:13:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.383 20:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.383 20:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:43.383 20:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.641 20:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.641 20:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:43.641 20:13:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.208 20:13:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:44.208 20:13:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:44.208 20:13:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.774 20:13:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:44.774 20:13:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:44.774 20:13:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:45.031 Initializing NVMe Controllers 00:13:45.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:45.032 Controller IO queue size 128, less than required. 
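
For reference, the spdk_nvme_perf invocation launched above (perf_pid=87606) reads as follows; the flag glosses are from memory of the tool's usage text, so verify them against this build's -h output rather than taking them as definitive:

  # Annotated re-statement of the command from the trace above.
  #   -c 0xC    core mask: cores 2 and 3, matching the "lcore 2"/"lcore 3" lines in the output
  #   -r '...'  transport ID of the target just configured (NVMe/TCP, IPv4, 10.0.0.2, service 4420)
  #   -t 3      run time in seconds
  #   -q 128    queue depth per qpair
  #   -w randrw -M 70   random mixed workload, 70% reads / 30% writes
  #   -o 512    I/O size in bytes
  #   -P 4      number of I/O qpairs per namespace (as I understand this flag)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
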
00:13:45.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:45.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:45.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:45.032 Initialization complete. Launching workers. 00:13:45.032 ======================================================== 00:13:45.032 Latency(us) 00:13:45.032 Device Information : IOPS MiB/s Average min max 00:13:45.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002769.47 1000154.55 1010028.71 00:13:45.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005018.24 1000270.47 1012370.05 00:13:45.032 ======================================================== 00:13:45.032 Total : 256.00 0.12 1003893.86 1000154.55 1012370.05 00:13:45.032 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87606 00:13:45.290 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (87606) - No such process 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 87606 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.290 rmmod nvme_tcp 00:13:45.290 rmmod nvme_fabrics 00:13:45.290 rmmod nvme_keyring 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 87503 ']' 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 87503 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 87503 ']' 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 87503 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87503 00:13:45.290 killing process with pid 87503 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87503' 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 87503 00:13:45.290 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 87503 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.550 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.809 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:45.809 00:13:45.809 real 0m9.381s 00:13:45.809 user 0m28.949s 00:13:45.809 sys 0m1.380s 00:13:45.809 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:45.809 20:13:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:45.809 ************************************ 00:13:45.809 END TEST nvmf_delete_subsystem 00:13:45.809 ************************************ 00:13:45.809 20:13:34 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:45.809 20:13:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:45.809 20:13:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:45.809 20:13:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.809 ************************************ 00:13:45.809 START TEST nvmf_ns_masking 00:13:45.809 ************************************ 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:45.809 * Looking for test storage... 
00:13:45.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.809 20:13:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=9c6cc34a-5998-4e9f-9df9-b9d5eff55d96 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.810 20:13:34 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:45.810 Cannot find device "nvmf_tgt_br" 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.810 Cannot find device "nvmf_tgt_br2" 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:45.810 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:46.069 Cannot find device "nvmf_tgt_br" 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:13:46.069 Cannot find device "nvmf_tgt_br2" 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.069 20:13:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.069 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.070 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:46.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:13:46.329 00:13:46.329 --- 10.0.0.2 ping statistics --- 00:13:46.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.329 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:46.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:13:46.329 00:13:46.329 --- 10.0.0.3 ping statistics --- 00:13:46.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.329 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:46.329 00:13:46.329 --- 10.0.0.1 ping statistics --- 00:13:46.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.329 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=87833 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 87833 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 87833 ']' 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:46.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
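
The nvmf_veth_init sequence traced above is the standard virtual topology for these tests: an initiator-side veth (nvmf_init_if, 10.0.0.1/24) stays on the host, two target-side veths (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, iptables admits TCP port 4420, and the three pings confirm reachability. Condensed into one sketch, using exactly the names and addresses the script uses (root required, and only sensible on a disposable test VM):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host -> first target interface, as the trace does
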
00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:46.329 20:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:46.329 [2024-07-14 20:13:35.265836] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:46.329 [2024-07-14 20:13:35.265975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.329 [2024-07-14 20:13:35.409045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.588 [2024-07-14 20:13:35.512382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.588 [2024-07-14 20:13:35.512446] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.588 [2024-07-14 20:13:35.512460] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.588 [2024-07-14 20:13:35.512471] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.588 [2024-07-14 20:13:35.512481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.588 [2024-07-14 20:13:35.512645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.588 [2024-07-14 20:13:35.512795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.588 [2024-07-14 20:13:35.513528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.588 [2024-07-14 20:13:35.513561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:47.525 [2024-07-14 20:13:36.504278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:47.525 20:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:47.784 Malloc1 00:13:47.784 20:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:48.043 Malloc2 00:13:48.043 20:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:48.302 20:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:48.560 20:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.819 [2024-07-14 20:13:37.714914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.819 20:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:13:48.819 20:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9c6cc34a-5998-4e9f-9df9-b9d5eff55d96 -a 10.0.0.2 -s 4420 -i 4 00:13:48.819 20:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.819 20:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:13:48.819 20:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.819 20:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:48.819 20:13:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:51.353 [ 0]:0x1 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1233b7c87e904c6387c28e00f0f9614e 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1233b7c87e904c6387c28e00f0f9614e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.353 20:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 
-n 2 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:51.353 [ 0]:0x1 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1233b7c87e904c6387c28e00f0f9614e 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1233b7c87e904c6387c28e00f0f9614e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:51.353 [ 1]:0x2 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0548e95e5da94c77bb54df0294c7af54 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0548e95e5da94c77bb54df0294c7af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.353 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.612 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:51.871 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:13:51.871 20:13:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9c6cc34a-5998-4e9f-9df9-b9d5eff55d96 -a 10.0.0.2 -s 4420 -i 4 00:13:52.130 20:13:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:52.130 20:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:13:52.130 20:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.130 20:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:13:52.130 20:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:13:52.130 20:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:13:54.034 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:54.034 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:54.034 20:13:43 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.034 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.035 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:54.295 [ 0]:0x2 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0548e95e5da94c77bb54df0294c7af54 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0548e95e5da94c77bb54df0294c7af54 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.295 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.556 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:54.557 [ 0]:0x1 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1233b7c87e904c6387c28e00f0f9614e 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1233b7c87e904c6387c28e00f0f9614e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:54.557 [ 1]:0x2 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0548e95e5da94c77bb54df0294c7af54 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0548e95e5da94c77bb54df0294c7af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.557 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.814 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:55.073 [ 0]:0x2 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0548e95e5da94c77bb54df0294c7af54 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0548e95e5da94c77bb54df0294c7af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:13:55.073 20:13:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.073 20:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9c6cc34a-5998-4e9f-9df9-b9d5eff55d96 -a 10.0.0.2 -s 4420 -i 4 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:13:55.331 20:13:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:57.862 
20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:57.862 [ 0]:0x1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1233b7c87e904c6387c28e00f0f9614e 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1233b7c87e904c6387c28e00f0f9614e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:57.862 [ 1]:0x2 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0548e95e5da94c77bb54df0294c7af54 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0548e95e5da94c77bb54df0294c7af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=00000000000000000000000000000000 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:57.862 [ 0]:0x2 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0548e95e5da94c77bb54df0294c7af54 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0548e95e5da94c77bb54df0294c7af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:57.862 20:13:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:58.120 [2024-07-14 20:13:47.115455] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:58.120 2024/07/14 20:13:47 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host 
method, err: Code=-32602 Msg=Invalid parameters 00:13:58.120 request: 00:13:58.120 { 00:13:58.120 "method": "nvmf_ns_remove_host", 00:13:58.120 "params": { 00:13:58.120 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.120 "nsid": 2, 00:13:58.120 "host": "nqn.2016-06.io.spdk:host1" 00:13:58.120 } 00:13:58.120 } 00:13:58.120 Got JSON-RPC error response 00:13:58.120 GoRPCClient: error on JSON-RPC call 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:58.120 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.121 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:58.121 [ 0]:0x2 00:13:58.121 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.121 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.379 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0548e95e5da94c77bb54df0294c7af54 00:13:58.379 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0548e95e5da94c77bb54df0294c7af54 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.379 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:13:58.379 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.379 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.637 rmmod nvme_tcp 00:13:58.637 rmmod nvme_fabrics 00:13:58.637 rmmod nvme_keyring 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 87833 ']' 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 87833 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 87833 ']' 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 87833 00:13:58.637 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:13:58.638 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.638 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87833 00:13:58.638 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:58.638 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:58.638 killing process with pid 87833 00:13:58.638 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87833' 00:13:58.638 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 87833 00:13:58.638 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 87833 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.896 20:13:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.155 20:13:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:59.155 00:13:59.155 real 0m13.293s 00:13:59.155 user 0m52.957s 00:13:59.155 sys 0m2.177s 00:13:59.155 20:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.155 20:13:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:59.155 ************************************ 00:13:59.155 END TEST nvmf_ns_masking 00:13:59.155 ************************************ 00:13:59.155 20:13:48 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:13:59.155 20:13:48 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:59.155 20:13:48 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:59.155 20:13:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:59.155 20:13:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:59.155 20:13:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.155 ************************************ 00:13:59.155 START TEST nvmf_host_management 00:13:59.155 ************************************ 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:59.155 * Looking for test storage... 00:13:59.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.155 20:13:48 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:59.155 Cannot find device "nvmf_tgt_br" 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:59.155 Cannot find device "nvmf_tgt_br2" 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:59.155 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:59.414 Cannot find device "nvmf_tgt_br" 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:59.414 Cannot find device "nvmf_tgt_br2" 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:59.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:59.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.414 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:59.415 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:59.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:13:59.674 00:13:59.674 --- 10.0.0.2 ping statistics --- 00:13:59.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.674 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:59.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:59.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:59.674 00:13:59.674 --- 10.0.0.3 ping statistics --- 00:13:59.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.674 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:59.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:13:59.674 00:13:59.674 --- 10.0.0.1 ping statistics --- 00:13:59.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.674 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=88392 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 88392 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 88392 ']' 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.674 20:13:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:59.674 [2024-07-14 20:13:48.626067] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
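A note on the network plumbing the trace above has just built: nvmf_veth_init creates an isolated namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, bridges everything over nvmf_br, opens TCP port 4420 on the initiator interface, and verifies reachability with the three pings. The following is a condensed, hand-written sketch of that topology using only the interface names, addresses, and commands visible in the log; it is not the common.sh implementation itself, and it skips the best-effort teardown of a previous run that the trace also shows.

    # Condensed sketch of the veth/namespace topology traced above (assumes root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target IP
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                # host -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                       # target -> host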
00:13:59.674 [2024-07-14 20:13:48.626171] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.932 [2024-07-14 20:13:48.768236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.932 [2024-07-14 20:13:48.877772] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.932 [2024-07-14 20:13:48.877826] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.932 [2024-07-14 20:13:48.877836] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.932 [2024-07-14 20:13:48.877844] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.932 [2024-07-14 20:13:48.877850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.932 [2024-07-14 20:13:48.878032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.932 [2024-07-14 20:13:48.878894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.932 [2024-07-14 20:13:48.879027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:59.932 [2024-07-14 20:13:48.879035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.866 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.866 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:14:00.866 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.866 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.866 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:00.867 [2024-07-14 20:13:49.694453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
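Before the Malloc0 namespace appears below, the test has started nvmf_tgt inside the namespace (pid 88392, core mask 0x1E) and issued rpc_cmd nvmf_create_transport -t tcp -o -u 8192, traced just above. The batched rpcs.txt it then replays is not echoed in this trace, so the following is only an assumed equivalent of the usual bring-up, inferred from the Malloc0 bdev and the 10.0.0.2:4420 listener reported in the next lines, the nqn.2016-06.io.spdk:cnode0/host0 names used by bdevperf below, and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values declared when host_management.sh was sourced earlier.

    # Assumed equivalent of the un-echoed RPC batch; not the verbatim rpcs.txt contents.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192      # traced above: *** TCP Transport Init ***
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420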
00:14:00.867 Malloc0 00:14:00.867 [2024-07-14 20:13:49.781458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=88466 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 88466 /var/tmp/bdevperf.sock 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 88466 ']' 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:00.867 { 00:14:00.867 "params": { 00:14:00.867 "name": "Nvme$subsystem", 00:14:00.867 "trtype": "$TEST_TRANSPORT", 00:14:00.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:00.867 "adrfam": "ipv4", 00:14:00.867 "trsvcid": "$NVMF_PORT", 00:14:00.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:00.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:00.867 "hdgst": ${hdgst:-false}, 00:14:00.867 "ddgst": ${ddgst:-false} 00:14:00.867 }, 00:14:00.867 "method": "bdev_nvme_attach_controller" 00:14:00.867 } 00:14:00.867 EOF 00:14:00.867 )") 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
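The heredoc above is the per-subsystem template that gen_nvmf_target_json renders into the JSON bdevperf reads from /dev/fd/63; the rendered result (a single Nvme0 controller attached over TCP to 10.0.0.2:4420) is printed in the next lines. Once bdevperf is running with -q 64 -o 65536 -w verify -t 10, the trace that follows polls it over its private RPC socket until at least 100 reads have completed. A condensed, hedged reading of that waitforio loop, with the socket path, bdev name, and jq filter taken from the trace, looks roughly like this:

    # Sketch of the polling shown below (not the verbatim helper; loop shape is assumed).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in {10..1}; do
        reads=$($RPC -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')    # e.g. 963 in this run
        [ "$reads" -ge 100 ] && break                # enough I/O observed; test can proceed
        sleep 1
    done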
00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:00.867 20:13:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:00.867 "params": { 00:14:00.867 "name": "Nvme0", 00:14:00.867 "trtype": "tcp", 00:14:00.867 "traddr": "10.0.0.2", 00:14:00.867 "adrfam": "ipv4", 00:14:00.867 "trsvcid": "4420", 00:14:00.867 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:00.867 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:00.867 "hdgst": false, 00:14:00.867 "ddgst": false 00:14:00.867 }, 00:14:00.867 "method": "bdev_nvme_attach_controller" 00:14:00.867 }' 00:14:00.867 [2024-07-14 20:13:49.887303] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:00.867 [2024-07-14 20:13:49.887965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88466 ] 00:14:01.125 [2024-07-14 20:13:50.032052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.126 [2024-07-14 20:13:50.134016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.411 Running I/O for 10 seconds... 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:01.981 20:13:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.981 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # 
read_io_count=963
00:14:01.981 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']'
00:14:01.981 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:14:01.981 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:14:01.981 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:14:01.982 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:01.982 20:13:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:01.982 20:13:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:01.982 [2024-07-14 20:13:51.011544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14840 is same with the state(5) to be set
[the *ERROR* line above is logged repeatedly, verbatim, for tqpair=0xb14840 through 20:13:51.011939]
00:14:01.982 [2024-07-14 20:13:51.012770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:01.982 [2024-07-14 20:13:51.012844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the WRITE command / ABORTED - SQ DELETION completion pair above is then logged for each remaining outstanding request, cid:1 through cid:61, lba:128 through lba:7808, len:128 each]
00:14:01.984 [2024-07-14
20:13:51.014652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.984 [2024-07-14 20:13:51.014664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.984 [2024-07-14 20:13:51.014678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:01.984 [2024-07-14 20:13:51.014689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.984 [2024-07-14 20:13:51.014809] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25d4790 was disconnected and freed. reset controller. 00:14:01.984 20:13:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.984 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:01.984 20:13:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.984 20:13:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:01.984 [2024-07-14 20:13:51.016268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:01.984 task offset: 0 on job bdev=Nvme0n1 fails 00:14:01.984 00:14:01.984 Latency(us) 00:14:01.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.984 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:01.984 Job: Nvme0n1 ended in about 0.70 seconds with error 00:14:01.984 Verification LBA range: start 0x0 length 0x400 00:14:01.984 Nvme0n1 : 0.70 1462.11 91.38 91.38 0.00 39987.73 4796.04 44326.17 00:14:01.984 =================================================================================================================== 00:14:01.984 Total : 1462.11 91.38 91.38 0.00 39987.73 4796.04 44326.17 00:14:01.984 [2024-07-14 20:13:51.018750] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:01.984 [2024-07-14 20:13:51.018788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2154c40 (9): Bad file descriptor 00:14:01.984 20:13:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.984 20:13:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:01.984 [2024-07-14 20:13:51.024445] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
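For reference, the host-management check above works by removing the host from the subsystem's allowed-host list while bdevperf I/O is still in flight and then adding it back, driven over the target's RPC socket; the aborted WRITEs and the controller reset logged above are the expected fallout. A minimal sketch of those two calls, assuming the default /var/tmp/spdk.sock RPC socket and the same rpc.py path used elsewhere in this run:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Disallow the host while I/O is running, then re-allow it so the
    # initiator can reconnect (NQNs taken from the run above).
    "$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    "$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
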
00:14:03.361 20:13:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 88466 00:14:03.361 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (88466) - No such process 00:14:03.361 20:13:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:03.361 20:13:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:03.361 20:13:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:03.361 20:13:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:03.361 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:03.361 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:03.362 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:03.362 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:03.362 { 00:14:03.362 "params": { 00:14:03.362 "name": "Nvme$subsystem", 00:14:03.362 "trtype": "$TEST_TRANSPORT", 00:14:03.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.362 "adrfam": "ipv4", 00:14:03.362 "trsvcid": "$NVMF_PORT", 00:14:03.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.362 "hdgst": ${hdgst:-false}, 00:14:03.362 "ddgst": ${ddgst:-false} 00:14:03.362 }, 00:14:03.362 "method": "bdev_nvme_attach_controller" 00:14:03.362 } 00:14:03.362 EOF 00:14:03.362 )") 00:14:03.362 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:03.362 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:03.362 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:03.362 20:13:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:03.362 "params": { 00:14:03.362 "name": "Nvme0", 00:14:03.362 "trtype": "tcp", 00:14:03.362 "traddr": "10.0.0.2", 00:14:03.362 "adrfam": "ipv4", 00:14:03.362 "trsvcid": "4420", 00:14:03.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:03.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:03.362 "hdgst": false, 00:14:03.362 "ddgst": false 00:14:03.362 }, 00:14:03.362 "method": "bdev_nvme_attach_controller" 00:14:03.362 }' 00:14:03.362 [2024-07-14 20:13:52.090687] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:03.362 [2024-07-14 20:13:52.090783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88515 ] 00:14:03.362 [2024-07-14 20:13:52.232790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.362 [2024-07-14 20:13:52.335103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.621 Running I/O for 1 seconds... 
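The bdevperf invocation above receives its bdev configuration through a file descriptor (--json /dev/fd/62) produced by gen_nvmf_target_json. As a rough standalone sketch of the same idea, the attach parameters printed above can be written to a file in the standard SPDK JSON-config layout and passed to bdevperf directly; the wrapper object and the temporary file name are assumptions here, while the parameters and command-line flags are copied from the run above:

    # Hypothetical standalone equivalent of the bdevperf run above:
    # attach Nvme0 over NVMe/TCP, then run a 1-second verify workload.
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same queue depth, IO size, workload and runtime as above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1
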
00:14:04.555 00:14:04.555 Latency(us) 00:14:04.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:04.555 Verification LBA range: start 0x0 length 0x400 00:14:04.555 Nvme0n1 : 1.04 1603.26 100.20 0.00 0.00 39201.65 6017.40 35508.60 00:14:04.555 =================================================================================================================== 00:14:04.555 Total : 1603.26 100.20 0.00 0.00 39201.65 6017.40 35508.60 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.122 20:13:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.122 rmmod nvme_tcp 00:14:05.122 rmmod nvme_fabrics 00:14:05.122 rmmod nvme_keyring 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 88392 ']' 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 88392 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 88392 ']' 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 88392 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88392 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:05.122 killing process with pid 88392 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88392' 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 88392 00:14:05.122 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 88392 00:14:05.381 [2024-07-14 20:13:54.392178] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:05.381 ************************************ 00:14:05.381 END TEST nvmf_host_management 00:14:05.381 ************************************ 00:14:05.381 00:14:05.381 real 0m6.390s 00:14:05.381 user 0m25.008s 00:14:05.381 sys 0m1.641s 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:05.381 20:13:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:05.639 20:13:54 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:05.639 20:13:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:05.639 20:13:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:05.639 20:13:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.639 ************************************ 00:14:05.639 START TEST nvmf_lvol 00:14:05.639 ************************************ 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:05.639 * Looking for test storage... 
00:14:05.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.639 20:13:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:05.640 20:13:54 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:05.640 Cannot find device "nvmf_tgt_br" 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.640 Cannot find device "nvmf_tgt_br2" 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:05.640 Cannot find device "nvmf_tgt_br" 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:05.640 Cannot find device "nvmf_tgt_br2" 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:14:05.640 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.937 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.200 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.200 20:13:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:06.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:14:06.200 00:14:06.201 --- 10.0.0.2 ping statistics --- 00:14:06.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.201 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:06.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:06.201 00:14:06.201 --- 10.0.0.3 ping statistics --- 00:14:06.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.201 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:06.201 00:14:06.201 --- 10.0.0.1 ping statistics --- 00:14:06.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.201 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=88730 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 88730 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 88730 ']' 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:06.201 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:06.201 [2024-07-14 20:13:55.097620] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:06.201 [2024-07-14 20:13:55.097742] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.201 [2024-07-14 20:13:55.239654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.463 [2024-07-14 20:13:55.353414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.463 [2024-07-14 20:13:55.353490] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
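For orientation, the nvmf_veth_init block above wires the initiator side (10.0.0.1) and the target network namespace (10.0.0.2) together with veth pairs hung off a bridge and opens TCP port 4420, which is what the ping checks just verified. A condensed sketch of that topology using the same interface names and addresses; the second target interface (10.0.0.3) and the cleanup/retry logic are left out:

    # Condensed version of the veth/netns setup performed by nvmf_veth_init above (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
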
00:14:06.463 [2024-07-14 20:13:55.353505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.463 [2024-07-14 20:13:55.353517] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.463 [2024-07-14 20:13:55.353526] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.463 [2024-07-14 20:13:55.354327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.463 [2024-07-14 20:13:55.354531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.463 [2024-07-14 20:13:55.354539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.028 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:07.028 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:14:07.028 20:13:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.028 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.028 20:13:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:07.028 20:13:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.028 20:13:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:07.285 [2024-07-14 20:13:56.268160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.285 20:13:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.543 20:13:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:07.543 20:13:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.109 20:13:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:08.109 20:13:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:08.109 20:13:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:08.674 20:13:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5ce1b75b-6c22-45c5-9e6f-b8d5065be0e2 00:14:08.675 20:13:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ce1b75b-6c22-45c5-9e6f-b8d5065be0e2 lvol 20 00:14:08.675 20:13:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c8d802f2-6ad7-41fc-82c4-5b17e8c7665a 00:14:08.675 20:13:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:08.931 20:13:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8d802f2-6ad7-41fc-82c4-5b17e8c7665a 00:14:09.188 20:13:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:09.447 [2024-07-14 20:13:58.328690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.447 20:13:58 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.705 20:13:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:09.705 20:13:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=88872 00:14:09.705 20:13:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:10.634 20:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c8d802f2-6ad7-41fc-82c4-5b17e8c7665a MY_SNAPSHOT 00:14:10.891 20:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2926f25b-6ad8-49a3-843c-d5bc4a9df7ab 00:14:10.891 20:13:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c8d802f2-6ad7-41fc-82c4-5b17e8c7665a 30 00:14:11.150 20:14:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2926f25b-6ad8-49a3-843c-d5bc4a9df7ab MY_CLONE 00:14:11.408 20:14:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=740bb8e2-bcd9-40cb-b55a-f83136798bb8 00:14:11.408 20:14:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 740bb8e2-bcd9-40cb-b55a-f83136798bb8 00:14:11.974 20:14:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 88872 00:14:20.083 Initializing NVMe Controllers 00:14:20.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:20.083 Controller IO queue size 128, less than required. 00:14:20.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:20.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:20.083 Initialization complete. Launching workers. 
00:14:20.083 ======================================================== 00:14:20.083 Latency(us) 00:14:20.083 Device Information : IOPS MiB/s Average min max 00:14:20.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11442.30 44.70 11186.38 1448.38 120788.14 00:14:20.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11108.60 43.39 11524.27 3943.04 54179.53 00:14:20.083 ======================================================== 00:14:20.083 Total : 22550.90 88.09 11352.83 1448.38 120788.14 00:14:20.083 00:14:20.083 20:14:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.083 20:14:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c8d802f2-6ad7-41fc-82c4-5b17e8c7665a 00:14:20.342 20:14:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ce1b75b-6c22-45c5-9e6f-b8d5065be0e2 00:14:20.600 20:14:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:20.600 20:14:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:20.600 20:14:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:20.600 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.600 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:20.600 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.859 rmmod nvme_tcp 00:14:20.859 rmmod nvme_fabrics 00:14:20.859 rmmod nvme_keyring 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 88730 ']' 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 88730 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 88730 ']' 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 88730 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88730 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:20.859 killing process with pid 88730 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88730' 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 88730 00:14:20.859 20:14:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 88730 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
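Taken together, the nvmf_lvol run above provisions and tears down its whole stack through rpc.py. A condensed recap of that RPC sequence with the same sizes and names; the shell variables stand in for the UUIDs the create calls print, and the discovery listener plus the spdk_nvme_perf workload are omitted:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Transport and backing store: two 64 MiB malloc bdevs striped into raid0,
    # with an lvolstore and a 20 MiB lvol on top.
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" bdev_malloc_create 64 512        # -> Malloc0
    "$rpc_py" bdev_malloc_create 64 512        # -> Malloc1
    "$rpc_py" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$("$rpc_py" bdev_lvol_create_lvstore raid0 lvs)
    lvol=$("$rpc_py" bdev_lvol_create -u "$lvs" lvol 20)
    # Export the lvol over NVMe/TCP.
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Snapshot/resize/clone/inflate, exercised while the perf workload runs.
    snap=$("$rpc_py" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    "$rpc_py" bdev_lvol_resize "$lvol" 30
    clone=$("$rpc_py" bdev_lvol_clone "$snap" MY_CLONE)
    "$rpc_py" bdev_lvol_inflate "$clone"
    # Teardown, as in the cleanup above.
    "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    "$rpc_py" bdev_lvol_delete "$lvol"
    "$rpc_py" bdev_lvol_delete_lvstore -u "$lvs"
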
00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:21.117 ************************************ 00:14:21.117 END TEST nvmf_lvol 00:14:21.117 ************************************ 00:14:21.117 00:14:21.117 real 0m15.664s 00:14:21.117 user 1m5.221s 00:14:21.117 sys 0m3.572s 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.117 20:14:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:21.376 20:14:10 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:21.376 20:14:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:21.376 20:14:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.376 20:14:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.376 ************************************ 00:14:21.376 START TEST nvmf_lvs_grow 00:14:21.376 ************************************ 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:21.376 * Looking for test storage... 
00:14:21.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.376 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:21.377 Cannot find device "nvmf_tgt_br" 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.377 Cannot find device "nvmf_tgt_br2" 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:21.377 Cannot find device "nvmf_tgt_br" 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:21.377 Cannot find device "nvmf_tgt_br2" 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:14:21.377 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.635 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.635 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:21.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:21.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:21.636 00:14:21.636 --- 10.0.0.2 ping statistics --- 00:14:21.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.636 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:21.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:21.636 00:14:21.636 --- 10.0.0.3 ping statistics --- 00:14:21.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.636 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:21.636 00:14:21.636 --- 10.0.0.1 ping statistics --- 00:14:21.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.636 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.636 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=89241 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 89241 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 89241 ']' 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
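
Everything from nvmf_veth_init through the pings above is the virtual test network being rebuilt: a network namespace for the target, veth pairs bridged back to the host, and iptables opened for port 4420, followed by loading nvme-tcp and launching nvmf_tgt inside the namespace. A hedged condensation (interface, namespace and address names exactly as in the log; the individual "ip link set ... up" steps are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                                  # likewise nvmf_tgt_br and nvmf_tgt_br2
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp
  # Target runs inside the namespace on core 0 with all tracepoint groups enabled.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

The three pings are the script's own reachability check: 10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace.
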
00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.895 20:14:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:21.895 [2024-07-14 20:14:10.788767] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:21.895 [2024-07-14 20:14:10.788851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.895 [2024-07-14 20:14:10.923883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.153 [2024-07-14 20:14:11.019919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.153 [2024-07-14 20:14:11.019979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.153 [2024-07-14 20:14:11.019989] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.153 [2024-07-14 20:14:11.019998] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.153 [2024-07-14 20:14:11.020005] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.153 [2024-07-14 20:14:11.020030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.720 20:14:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.720 20:14:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:14:22.720 20:14:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.720 20:14:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.720 20:14:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:22.978 20:14:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.978 20:14:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:23.236 [2024-07-14 20:14:12.094678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:23.236 ************************************ 00:14:23.236 START TEST lvs_grow_clean 00:14:23.236 ************************************ 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:23.236 20:14:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:23.236 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.494 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:23.494 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:23.752 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c86096a9-936d-4b12-be5d-70c44093f81d 00:14:23.752 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:23.752 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:24.010 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:24.010 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:24.010 20:14:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c86096a9-936d-4b12-be5d-70c44093f81d lvol 150 00:14:24.273 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=519faa29-f39d-4919-9d90-8ee119211d65 00:14:24.274 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:24.274 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:24.274 [2024-07-14 20:14:13.289589] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:24.274 [2024-07-14 20:14:13.289672] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:24.274 true 00:14:24.274 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:24.274 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:24.538 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:24.538 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:24.796 20:14:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 519faa29-f39d-4919-9d90-8ee119211d65 00:14:25.053 20:14:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:25.311 [2024-07-14 20:14:14.170197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89403 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89403 /var/tmp/bdevperf.sock 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 89403 ']' 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.311 20:14:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:25.569 [2024-07-14 20:14:14.428789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
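
Setup for lvs_grow_clean, traced above, builds a logical-volume store on a file-backed AIO bdev, exports a single lvol over the TCP listener, and then brings up bdevperf as the initiator (its EAL start-up output continues below). A condensed, hedged sketch; $AIO, $lvs and $lvol are placeholders for the path and UUIDs the log reports:

  AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO"                                        # 200 MiB backing file
  $RPC bdev_aio_create "$AIO" aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)       # log reports 49 data clusters at this point
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB lvol
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf runs on core 1 (-m 0x2) and waits (-z) for an RPC-driven attach.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
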
00:14:25.570 [2024-07-14 20:14:14.428870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89403 ] 00:14:25.570 [2024-07-14 20:14:14.565990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.827 [2024-07-14 20:14:14.666975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.393 20:14:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.393 20:14:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:14:26.393 20:14:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:26.652 Nvme0n1 00:14:26.652 20:14:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:26.910 [ 00:14:26.910 { 00:14:26.910 "aliases": [ 00:14:26.910 "519faa29-f39d-4919-9d90-8ee119211d65" 00:14:26.910 ], 00:14:26.910 "assigned_rate_limits": { 00:14:26.910 "r_mbytes_per_sec": 0, 00:14:26.910 "rw_ios_per_sec": 0, 00:14:26.910 "rw_mbytes_per_sec": 0, 00:14:26.910 "w_mbytes_per_sec": 0 00:14:26.910 }, 00:14:26.910 "block_size": 4096, 00:14:26.910 "claimed": false, 00:14:26.910 "driver_specific": { 00:14:26.910 "mp_policy": "active_passive", 00:14:26.910 "nvme": [ 00:14:26.910 { 00:14:26.910 "ctrlr_data": { 00:14:26.910 "ana_reporting": false, 00:14:26.910 "cntlid": 1, 00:14:26.910 "firmware_revision": "24.05.1", 00:14:26.910 "model_number": "SPDK bdev Controller", 00:14:26.910 "multi_ctrlr": true, 00:14:26.910 "oacs": { 00:14:26.910 "firmware": 0, 00:14:26.910 "format": 0, 00:14:26.910 "ns_manage": 0, 00:14:26.910 "security": 0 00:14:26.910 }, 00:14:26.910 "serial_number": "SPDK0", 00:14:26.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:26.910 "vendor_id": "0x8086" 00:14:26.910 }, 00:14:26.910 "ns_data": { 00:14:26.910 "can_share": true, 00:14:26.910 "id": 1 00:14:26.910 }, 00:14:26.910 "trid": { 00:14:26.910 "adrfam": "IPv4", 00:14:26.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:26.910 "traddr": "10.0.0.2", 00:14:26.910 "trsvcid": "4420", 00:14:26.910 "trtype": "TCP" 00:14:26.910 }, 00:14:26.910 "vs": { 00:14:26.910 "nvme_version": "1.3" 00:14:26.910 } 00:14:26.910 } 00:14:26.910 ] 00:14:26.910 }, 00:14:26.910 "memory_domains": [ 00:14:26.910 { 00:14:26.910 "dma_device_id": "system", 00:14:26.910 "dma_device_type": 1 00:14:26.910 } 00:14:26.910 ], 00:14:26.910 "name": "Nvme0n1", 00:14:26.910 "num_blocks": 38912, 00:14:26.910 "product_name": "NVMe disk", 00:14:26.910 "supported_io_types": { 00:14:26.910 "abort": true, 00:14:26.910 "compare": true, 00:14:26.910 "compare_and_write": true, 00:14:26.910 "flush": true, 00:14:26.910 "nvme_admin": true, 00:14:26.910 "nvme_io": true, 00:14:26.910 "read": true, 00:14:26.910 "reset": true, 00:14:26.910 "unmap": true, 00:14:26.910 "write": true, 00:14:26.910 "write_zeroes": true 00:14:26.910 }, 00:14:26.910 "uuid": "519faa29-f39d-4919-9d90-8ee119211d65", 00:14:26.910 "zoned": false 00:14:26.910 } 00:14:26.910 ] 00:14:26.910 20:14:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89454 00:14:26.910 20:14:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:26.910 20:14:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:27.169 Running I/O for 10 seconds... 00:14:28.102 Latency(us) 00:14:28.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.102 Nvme0n1 : 1.00 9271.00 36.21 0.00 0.00 0.00 0.00 0.00 00:14:28.102 =================================================================================================================== 00:14:28.102 Total : 9271.00 36.21 0.00 0.00 0.00 0.00 0.00 00:14:28.102 00:14:29.130 20:14:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:29.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.130 Nvme0n1 : 2.00 9161.00 35.79 0.00 0.00 0.00 0.00 0.00 00:14:29.130 =================================================================================================================== 00:14:29.130 Total : 9161.00 35.79 0.00 0.00 0.00 0.00 0.00 00:14:29.130 00:14:29.389 true 00:14:29.389 20:14:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:29.389 20:14:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:29.647 20:14:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:29.647 20:14:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:29.647 20:14:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 89454 00:14:30.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.213 Nvme0n1 : 3.00 9118.33 35.62 0.00 0.00 0.00 0.00 0.00 00:14:30.213 =================================================================================================================== 00:14:30.213 Total : 9118.33 35.62 0.00 0.00 0.00 0.00 0.00 00:14:30.213 00:14:31.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.148 Nvme0n1 : 4.00 9111.00 35.59 0.00 0.00 0.00 0.00 0.00 00:14:31.148 =================================================================================================================== 00:14:31.148 Total : 9111.00 35.59 0.00 0.00 0.00 0.00 0.00 00:14:31.148 00:14:32.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.083 Nvme0n1 : 5.00 9100.60 35.55 0.00 0.00 0.00 0.00 0.00 00:14:32.083 =================================================================================================================== 00:14:32.083 Total : 9100.60 35.55 0.00 0.00 0.00 0.00 0.00 00:14:32.083 00:14:33.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.019 Nvme0n1 : 6.00 9075.50 35.45 0.00 0.00 0.00 0.00 0.00 00:14:33.019 =================================================================================================================== 00:14:33.019 Total : 9075.50 35.45 0.00 0.00 0.00 0.00 0.00 00:14:33.019 00:14:33.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:14:33.954 Nvme0n1 : 7.00 9068.14 35.42 0.00 0.00 0.00 0.00 0.00 00:14:33.954 =================================================================================================================== 00:14:33.954 Total : 9068.14 35.42 0.00 0.00 0.00 0.00 0.00 00:14:33.954 00:14:35.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.329 Nvme0n1 : 8.00 9059.75 35.39 0.00 0.00 0.00 0.00 0.00 00:14:35.329 =================================================================================================================== 00:14:35.329 Total : 9059.75 35.39 0.00 0.00 0.00 0.00 0.00 00:14:35.329 00:14:36.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.262 Nvme0n1 : 9.00 9043.22 35.33 0.00 0.00 0.00 0.00 0.00 00:14:36.262 =================================================================================================================== 00:14:36.263 Total : 9043.22 35.33 0.00 0.00 0.00 0.00 0.00 00:14:36.263 00:14:37.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.197 Nvme0n1 : 10.00 9033.70 35.29 0.00 0.00 0.00 0.00 0.00 00:14:37.197 =================================================================================================================== 00:14:37.197 Total : 9033.70 35.29 0.00 0.00 0.00 0.00 0.00 00:14:37.197 00:14:37.197 00:14:37.197 Latency(us) 00:14:37.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.197 Nvme0n1 : 10.01 9033.65 35.29 0.00 0.00 14159.04 6881.28 30980.65 00:14:37.197 =================================================================================================================== 00:14:37.197 Total : 9033.65 35.29 0.00 0.00 14159.04 6881.28 30980.65 00:14:37.197 0 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89403 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 89403 ']' 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 89403 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89403 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:37.197 killing process with pid 89403 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89403' 00:14:37.197 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.197 00:14:37.197 Latency(us) 00:14:37.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.197 =================================================================================================================== 00:14:37.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 89403 00:14:37.197 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@970 -- # wait 89403 00:14:37.455 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.714 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:37.714 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:37.714 20:14:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:37.972 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:37.972 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:37.972 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:38.230 [2024-07-14 20:14:27.277456] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:38.489 2024/07/14 20:14:27 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c86096a9-936d-4b12-be5d-70c44093f81d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:38.489 request: 00:14:38.489 { 00:14:38.489 "method": "bdev_lvol_get_lvstores", 00:14:38.489 "params": { 00:14:38.489 "uuid": 
"c86096a9-936d-4b12-be5d-70c44093f81d" 00:14:38.489 } 00:14:38.489 } 00:14:38.489 Got JSON-RPC error response 00:14:38.489 GoRPCClient: error on JSON-RPC call 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:38.489 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:38.748 aio_bdev 00:14:38.748 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 519faa29-f39d-4919-9d90-8ee119211d65 00:14:38.748 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=519faa29-f39d-4919-9d90-8ee119211d65 00:14:38.748 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:38.748 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:14:38.748 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:38.748 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:38.748 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:39.007 20:14:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 519faa29-f39d-4919-9d90-8ee119211d65 -t 2000 00:14:39.265 [ 00:14:39.265 { 00:14:39.265 "aliases": [ 00:14:39.265 "lvs/lvol" 00:14:39.265 ], 00:14:39.265 "assigned_rate_limits": { 00:14:39.265 "r_mbytes_per_sec": 0, 00:14:39.265 "rw_ios_per_sec": 0, 00:14:39.265 "rw_mbytes_per_sec": 0, 00:14:39.265 "w_mbytes_per_sec": 0 00:14:39.265 }, 00:14:39.265 "block_size": 4096, 00:14:39.265 "claimed": false, 00:14:39.265 "driver_specific": { 00:14:39.265 "lvol": { 00:14:39.265 "base_bdev": "aio_bdev", 00:14:39.265 "clone": false, 00:14:39.265 "esnap_clone": false, 00:14:39.265 "lvol_store_uuid": "c86096a9-936d-4b12-be5d-70c44093f81d", 00:14:39.265 "num_allocated_clusters": 38, 00:14:39.265 "snapshot": false, 00:14:39.265 "thin_provision": false 00:14:39.265 } 00:14:39.265 }, 00:14:39.265 "name": "519faa29-f39d-4919-9d90-8ee119211d65", 00:14:39.265 "num_blocks": 38912, 00:14:39.265 "product_name": "Logical Volume", 00:14:39.265 "supported_io_types": { 00:14:39.265 "abort": false, 00:14:39.265 "compare": false, 00:14:39.265 "compare_and_write": false, 00:14:39.265 "flush": false, 00:14:39.265 "nvme_admin": false, 00:14:39.265 "nvme_io": false, 00:14:39.265 "read": true, 00:14:39.265 "reset": true, 00:14:39.265 "unmap": true, 00:14:39.265 "write": true, 00:14:39.265 "write_zeroes": true 00:14:39.265 }, 00:14:39.265 "uuid": "519faa29-f39d-4919-9d90-8ee119211d65", 00:14:39.265 "zoned": false 00:14:39.265 } 00:14:39.265 ] 00:14:39.265 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:14:39.266 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:39.266 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:39.524 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:39.524 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:39.524 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:39.782 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:39.782 20:14:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 519faa29-f39d-4919-9d90-8ee119211d65 00:14:40.039 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c86096a9-936d-4b12-be5d-70c44093f81d 00:14:40.296 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:40.553 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:40.811 ************************************ 00:14:40.811 END TEST lvs_grow_clean 00:14:40.811 ************************************ 00:14:40.811 00:14:40.811 real 0m17.747s 00:14:40.811 user 0m16.870s 00:14:40.811 sys 0m2.292s 00:14:40.811 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.811 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.069 ************************************ 00:14:41.069 START TEST lvs_grow_dirty 00:14:41.069 ************************************ 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:41.069 20:14:29 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:41.069 20:14:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.326 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:41.326 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:41.584 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:41.584 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:41.584 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:41.841 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:41.841 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:41.841 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 lvol 150 00:14:42.100 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6085ca55-c647-4d70-ac39-c0d90fc8e68c 00:14:42.100 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.100 20:14:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:42.100 [2024-07-14 20:14:31.128609] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:42.100 [2024-07-14 20:14:31.128696] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:42.100 true 00:14:42.100 20:14:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:42.100 20:14:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:42.359 20:14:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:42.359 20:14:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:42.618 20:14:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6085ca55-c647-4d70-ac39-c0d90fc8e68c 00:14:42.876 20:14:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:43.135 [2024-07-14 20:14:32.029080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.135 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89842 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89842 /var/tmp/bdevperf.sock 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 89842 ']' 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:43.394 20:14:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:43.394 [2024-07-14 20:14:32.379788] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
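
lvs_grow_dirty repeats the same scaffolding with a fresh lvstore and lvol (UUIDs as printed above). The growth mechanics that both variants check reduce to enlarging the backing file, rescanning the AIO bdev, and growing the lvstore until the data-cluster count roughly doubles; a hedged sketch using the same placeholders as before:

  truncate -s 400M "$AIO"                      # backing file grows from 200 MiB to 400 MiB
  $RPC bdev_aio_rescan aio_bdev                # NOTICE lines report 51200 -> 102400 blocks
  $RPC bdev_lvol_grow_lvstore -u "$lvs"        # lvstore claims the new space
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after in this log

Those jq readings are what the (( data_clusters == 49 )) and (( data_clusters == 99 )) assertions in the trace compare against.
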
00:14:43.394 [2024-07-14 20:14:32.379886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89842 ] 00:14:43.652 [2024-07-14 20:14:32.517904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.652 [2024-07-14 20:14:32.610324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.219 20:14:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:44.219 20:14:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:44.477 20:14:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:44.477 Nvme0n1 00:14:44.734 20:14:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:44.991 [ 00:14:44.991 { 00:14:44.991 "aliases": [ 00:14:44.991 "6085ca55-c647-4d70-ac39-c0d90fc8e68c" 00:14:44.991 ], 00:14:44.991 "assigned_rate_limits": { 00:14:44.991 "r_mbytes_per_sec": 0, 00:14:44.991 "rw_ios_per_sec": 0, 00:14:44.991 "rw_mbytes_per_sec": 0, 00:14:44.991 "w_mbytes_per_sec": 0 00:14:44.991 }, 00:14:44.991 "block_size": 4096, 00:14:44.991 "claimed": false, 00:14:44.991 "driver_specific": { 00:14:44.991 "mp_policy": "active_passive", 00:14:44.991 "nvme": [ 00:14:44.991 { 00:14:44.991 "ctrlr_data": { 00:14:44.991 "ana_reporting": false, 00:14:44.991 "cntlid": 1, 00:14:44.991 "firmware_revision": "24.05.1", 00:14:44.991 "model_number": "SPDK bdev Controller", 00:14:44.991 "multi_ctrlr": true, 00:14:44.991 "oacs": { 00:14:44.991 "firmware": 0, 00:14:44.991 "format": 0, 00:14:44.991 "ns_manage": 0, 00:14:44.991 "security": 0 00:14:44.991 }, 00:14:44.991 "serial_number": "SPDK0", 00:14:44.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:44.991 "vendor_id": "0x8086" 00:14:44.991 }, 00:14:44.991 "ns_data": { 00:14:44.991 "can_share": true, 00:14:44.991 "id": 1 00:14:44.991 }, 00:14:44.991 "trid": { 00:14:44.991 "adrfam": "IPv4", 00:14:44.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:44.991 "traddr": "10.0.0.2", 00:14:44.991 "trsvcid": "4420", 00:14:44.991 "trtype": "TCP" 00:14:44.991 }, 00:14:44.991 "vs": { 00:14:44.991 "nvme_version": "1.3" 00:14:44.991 } 00:14:44.991 } 00:14:44.991 ] 00:14:44.991 }, 00:14:44.991 "memory_domains": [ 00:14:44.991 { 00:14:44.991 "dma_device_id": "system", 00:14:44.991 "dma_device_type": 1 00:14:44.991 } 00:14:44.991 ], 00:14:44.991 "name": "Nvme0n1", 00:14:44.991 "num_blocks": 38912, 00:14:44.991 "product_name": "NVMe disk", 00:14:44.991 "supported_io_types": { 00:14:44.991 "abort": true, 00:14:44.991 "compare": true, 00:14:44.991 "compare_and_write": true, 00:14:44.991 "flush": true, 00:14:44.991 "nvme_admin": true, 00:14:44.991 "nvme_io": true, 00:14:44.991 "read": true, 00:14:44.991 "reset": true, 00:14:44.991 "unmap": true, 00:14:44.991 "write": true, 00:14:44.991 "write_zeroes": true 00:14:44.991 }, 00:14:44.991 "uuid": "6085ca55-c647-4d70-ac39-c0d90fc8e68c", 00:14:44.991 "zoned": false 00:14:44.991 } 00:14:44.991 ] 00:14:44.991 20:14:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:44.991 20:14:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89890 00:14:44.991 20:14:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:44.991 Running I/O for 10 seconds... 00:14:45.925 Latency(us) 00:14:45.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.925 Nvme0n1 : 1.00 9337.00 36.47 0.00 0.00 0.00 0.00 0.00 00:14:45.925 =================================================================================================================== 00:14:45.925 Total : 9337.00 36.47 0.00 0.00 0.00 0.00 0.00 00:14:45.925 00:14:46.859 20:14:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:46.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.859 Nvme0n1 : 2.00 9167.50 35.81 0.00 0.00 0.00 0.00 0.00 00:14:46.859 =================================================================================================================== 00:14:46.859 Total : 9167.50 35.81 0.00 0.00 0.00 0.00 0.00 00:14:46.859 00:14:47.118 true 00:14:47.118 20:14:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:47.118 20:14:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:47.684 20:14:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:47.684 20:14:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:47.684 20:14:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 89890 00:14:47.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.943 Nvme0n1 : 3.00 9108.67 35.58 0.00 0.00 0.00 0.00 0.00 00:14:47.943 =================================================================================================================== 00:14:47.943 Total : 9108.67 35.58 0.00 0.00 0.00 0.00 0.00 00:14:47.943 00:14:48.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.877 Nvme0n1 : 4.00 9103.75 35.56 0.00 0.00 0.00 0.00 0.00 00:14:48.877 =================================================================================================================== 00:14:48.877 Total : 9103.75 35.56 0.00 0.00 0.00 0.00 0.00 00:14:48.877 00:14:50.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.247 Nvme0n1 : 5.00 9078.80 35.46 0.00 0.00 0.00 0.00 0.00 00:14:50.247 =================================================================================================================== 00:14:50.247 Total : 9078.80 35.46 0.00 0.00 0.00 0.00 0.00 00:14:50.247 00:14:51.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.178 Nvme0n1 : 6.00 9024.50 35.25 0.00 0.00 0.00 0.00 0.00 00:14:51.178 =================================================================================================================== 00:14:51.178 Total : 9024.50 35.25 0.00 0.00 0.00 0.00 0.00 00:14:51.178 00:14:52.109 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.109 Nvme0n1 : 7.00 8906.43 34.79 0.00 0.00 0.00 0.00 0.00 00:14:52.109 =================================================================================================================== 00:14:52.109 Total : 8906.43 34.79 0.00 0.00 0.00 0.00 0.00 00:14:52.109 00:14:53.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.041 Nvme0n1 : 8.00 8827.38 34.48 0.00 0.00 0.00 0.00 0.00 00:14:53.041 =================================================================================================================== 00:14:53.041 Total : 8827.38 34.48 0.00 0.00 0.00 0.00 0.00 00:14:53.041 00:14:53.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.974 Nvme0n1 : 9.00 8753.89 34.19 0.00 0.00 0.00 0.00 0.00 00:14:53.974 =================================================================================================================== 00:14:53.975 Total : 8753.89 34.19 0.00 0.00 0.00 0.00 0.00 00:14:53.975 00:14:54.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.908 Nvme0n1 : 10.00 8687.80 33.94 0.00 0.00 0.00 0.00 0.00 00:14:54.908 =================================================================================================================== 00:14:54.908 Total : 8687.80 33.94 0.00 0.00 0.00 0.00 0.00 00:14:54.908 00:14:54.908 00:14:54.908 Latency(us) 00:14:54.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.908 Nvme0n1 : 10.01 8693.54 33.96 0.00 0.00 14718.79 5064.15 56718.43 00:14:54.908 =================================================================================================================== 00:14:54.908 Total : 8693.54 33.96 0.00 0.00 14718.79 5064.15 56718.43 00:14:54.908 0 00:14:54.908 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89842 00:14:54.908 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 89842 ']' 00:14:54.908 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 89842 00:14:54.908 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:54.908 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:54.908 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89842 00:14:55.167 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:55.167 killing process with pid 89842 00:14:55.167 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:55.167 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89842' 00:14:55.167 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.167 00:14:55.167 Latency(us) 00:14:55.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.167 =================================================================================================================== 00:14:55.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.167 20:14:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 89842 00:14:55.167 20:14:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 89842 00:14:55.167 20:14:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.426 20:14:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:55.994 20:14:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:55.994 20:14:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 89241 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 89241 00:14:55.994 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 89241 Killed "${NVMF_APP[@]}" "$@" 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=90053 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 90053 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 90053 ']' 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.994 20:14:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:56.253 [2024-07-14 20:14:45.111148] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:14:56.253 [2024-07-14 20:14:45.111265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.253 [2024-07-14 20:14:45.250521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.512 [2024-07-14 20:14:45.360896] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.512 [2024-07-14 20:14:45.360975] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.512 [2024-07-14 20:14:45.360987] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.512 [2024-07-14 20:14:45.360995] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.512 [2024-07-14 20:14:45.361002] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.512 [2024-07-14 20:14:45.361035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.079 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:57.079 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:57.079 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.079 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.079 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:57.079 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.079 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.337 [2024-07-14 20:14:46.371557] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:57.337 [2024-07-14 20:14:46.371993] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:57.337 [2024-07-14 20:14:46.372193] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6085ca55-c647-4d70-ac39-c0d90fc8e68c 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6085ca55-c647-4d70-ac39-c0d90fc8e68c 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:57.596 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:57.856 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6085ca55-c647-4d70-ac39-c0d90fc8e68c -t 2000 00:14:57.856 [ 00:14:57.856 { 00:14:57.856 "aliases": [ 00:14:57.856 "lvs/lvol" 00:14:57.856 ], 00:14:57.856 "assigned_rate_limits": { 00:14:57.856 "r_mbytes_per_sec": 0, 00:14:57.856 "rw_ios_per_sec": 0, 00:14:57.856 "rw_mbytes_per_sec": 0, 00:14:57.856 "w_mbytes_per_sec": 0 00:14:57.856 }, 00:14:57.856 "block_size": 4096, 00:14:57.856 "claimed": false, 00:14:57.856 "driver_specific": { 00:14:57.856 "lvol": { 00:14:57.856 "base_bdev": "aio_bdev", 00:14:57.856 "clone": false, 00:14:57.856 "esnap_clone": false, 00:14:57.856 "lvol_store_uuid": "2033f8a1-35d9-45b1-9af0-739bc4f4f615", 00:14:57.856 "num_allocated_clusters": 38, 00:14:57.856 "snapshot": false, 00:14:57.856 "thin_provision": false 00:14:57.856 } 00:14:57.856 }, 00:14:57.856 "name": "6085ca55-c647-4d70-ac39-c0d90fc8e68c", 00:14:57.856 "num_blocks": 38912, 00:14:57.856 "product_name": "Logical Volume", 00:14:57.856 "supported_io_types": { 00:14:57.856 "abort": false, 00:14:57.856 "compare": false, 00:14:57.856 "compare_and_write": false, 00:14:57.856 "flush": false, 00:14:57.856 "nvme_admin": false, 00:14:57.856 "nvme_io": false, 00:14:57.856 "read": true, 00:14:57.856 "reset": true, 00:14:57.856 "unmap": true, 00:14:57.856 "write": true, 00:14:57.856 "write_zeroes": true 00:14:57.856 }, 00:14:57.856 "uuid": "6085ca55-c647-4d70-ac39-c0d90fc8e68c", 00:14:57.856 "zoned": false 00:14:57.856 } 00:14:57.856 ] 00:14:57.856 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:57.856 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:57.856 20:14:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:58.114 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:58.114 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:58.114 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:58.373 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:58.373 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:58.632 [2024-07-14 20:14:47.648799] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:58.632 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:58.890 2024/07/14 20:14:47 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:2033f8a1-35d9-45b1-9af0-739bc4f4f615], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:58.890 request: 00:14:58.890 { 00:14:58.890 "method": "bdev_lvol_get_lvstores", 00:14:58.890 "params": { 00:14:58.890 "uuid": "2033f8a1-35d9-45b1-9af0-739bc4f4f615" 00:14:58.890 } 00:14:58.890 } 00:14:58.890 Got JSON-RPC error response 00:14:58.890 GoRPCClient: error on JSON-RPC call 00:14:59.149 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:59.149 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.149 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.149 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.149 20:14:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:59.149 aio_bdev 00:14:59.149 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6085ca55-c647-4d70-ac39-c0d90fc8e68c 00:14:59.149 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6085ca55-c647-4d70-ac39-c0d90fc8e68c 00:14:59.149 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:59.149 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:59.149 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:59.149 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:59.149 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:59.409 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6085ca55-c647-4d70-ac39-c0d90fc8e68c -t 2000 00:14:59.668 [ 00:14:59.668 { 00:14:59.668 "aliases": [ 00:14:59.668 "lvs/lvol" 00:14:59.668 ], 00:14:59.668 
"assigned_rate_limits": { 00:14:59.668 "r_mbytes_per_sec": 0, 00:14:59.668 "rw_ios_per_sec": 0, 00:14:59.668 "rw_mbytes_per_sec": 0, 00:14:59.668 "w_mbytes_per_sec": 0 00:14:59.668 }, 00:14:59.668 "block_size": 4096, 00:14:59.668 "claimed": false, 00:14:59.668 "driver_specific": { 00:14:59.668 "lvol": { 00:14:59.668 "base_bdev": "aio_bdev", 00:14:59.668 "clone": false, 00:14:59.668 "esnap_clone": false, 00:14:59.668 "lvol_store_uuid": "2033f8a1-35d9-45b1-9af0-739bc4f4f615", 00:14:59.668 "num_allocated_clusters": 38, 00:14:59.668 "snapshot": false, 00:14:59.668 "thin_provision": false 00:14:59.668 } 00:14:59.668 }, 00:14:59.668 "name": "6085ca55-c647-4d70-ac39-c0d90fc8e68c", 00:14:59.668 "num_blocks": 38912, 00:14:59.668 "product_name": "Logical Volume", 00:14:59.668 "supported_io_types": { 00:14:59.668 "abort": false, 00:14:59.668 "compare": false, 00:14:59.668 "compare_and_write": false, 00:14:59.668 "flush": false, 00:14:59.668 "nvme_admin": false, 00:14:59.668 "nvme_io": false, 00:14:59.668 "read": true, 00:14:59.668 "reset": true, 00:14:59.668 "unmap": true, 00:14:59.668 "write": true, 00:14:59.668 "write_zeroes": true 00:14:59.668 }, 00:14:59.668 "uuid": "6085ca55-c647-4d70-ac39-c0d90fc8e68c", 00:14:59.668 "zoned": false 00:14:59.668 } 00:14:59.668 ] 00:14:59.668 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:59.668 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:14:59.668 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:59.927 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:59.927 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:59.927 20:14:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:15:00.187 20:14:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:00.187 20:14:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6085ca55-c647-4d70-ac39-c0d90fc8e68c 00:15:00.470 20:14:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2033f8a1-35d9-45b1-9af0-739bc4f4f615 00:15:00.731 20:14:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:00.989 20:14:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:01.247 ************************************ 00:15:01.247 END TEST lvs_grow_dirty 00:15:01.247 ************************************ 00:15:01.247 00:15:01.247 real 0m20.298s 00:15:01.247 user 0m41.201s 00:15:01.247 sys 0m8.652s 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:01.247 nvmf_trace.0 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.247 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.506 rmmod nvme_tcp 00:15:01.506 rmmod nvme_fabrics 00:15:01.506 rmmod nvme_keyring 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 90053 ']' 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 90053 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 90053 ']' 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 90053 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90053 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:01.506 killing process with pid 90053 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90053' 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 90053 00:15:01.506 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 90053 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:02.074 00:15:02.074 real 0m40.669s 00:15:02.074 user 1m4.402s 00:15:02.074 sys 0m11.748s 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:02.074 20:14:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:02.074 ************************************ 00:15:02.074 END TEST nvmf_lvs_grow 00:15:02.074 ************************************ 00:15:02.074 20:14:50 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:02.074 20:14:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:02.074 20:14:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:02.074 20:14:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.074 ************************************ 00:15:02.074 START TEST nvmf_bdev_io_wait 00:15:02.074 ************************************ 00:15:02.074 20:14:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:02.074 * Looking for test storage... 00:15:02.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.074 
20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:02.074 Cannot find device "nvmf_tgt_br" 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.074 Cannot find device "nvmf_tgt_br2" 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:02.074 Cannot find device "nvmf_tgt_br" 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:02.074 Cannot find device "nvmf_tgt_br2" 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:15:02.074 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
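For orientation, the nvmf_veth_init plumbing being set up here (the bridge enslaving, iptables rule, and ping checks continue in the trace just below) amounts to the following topology; the interface names and addresses are the ones this harness uses:

# The target runs inside its own network namespace, reachable over veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target listener, 10.0.0.2/24
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# All the *_br peers are then enslaved to one bridge, TCP/4420 is allowed in,
# and connectivity is sanity-checked with ping before the test proper starts.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3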
00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:02.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:15:02.333 00:15:02.333 --- 10.0.0.2 ping statistics --- 00:15:02.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.333 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:02.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:02.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:02.333 00:15:02.333 --- 10.0.0.3 ping statistics --- 00:15:02.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.333 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:02.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:02.333 00:15:02.333 --- 10.0.0.1 ping statistics --- 00:15:02.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.333 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.333 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=90469 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 90469 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 90469 ']' 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:02.592 20:14:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.592 [2024-07-14 20:14:51.492117] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:02.592 [2024-07-14 20:14:51.492248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.592 [2024-07-14 20:14:51.636433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.850 [2024-07-14 20:14:51.733383] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.850 [2024-07-14 20:14:51.733471] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:02.850 [2024-07-14 20:14:51.733481] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.850 [2024-07-14 20:14:51.733489] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.850 [2024-07-14 20:14:51.733495] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.850 [2024-07-14 20:14:51.733664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.850 [2024-07-14 20:14:51.734010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.850 [2024-07-14 20:14:51.734822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.850 [2024-07-14 20:14:51.734784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.418 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.678 [2024-07-14 20:14:52.575449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.678 Malloc0 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.678 
20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.678 [2024-07-14 20:14:52.640377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.678 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=90522 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=90524 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.679 { 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme$subsystem", 00:15:03.679 "trtype": "$TEST_TRANSPORT", 00:15:03.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "$NVMF_PORT", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.679 "hdgst": ${hdgst:-false}, 00:15:03.679 "ddgst": ${ddgst:-false} 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 } 00:15:03.679 EOF 00:15:03.679 )") 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=90526 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.679 { 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme$subsystem", 
00:15:03.679 "trtype": "$TEST_TRANSPORT", 00:15:03.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "$NVMF_PORT", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.679 "hdgst": ${hdgst:-false}, 00:15:03.679 "ddgst": ${ddgst:-false} 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 } 00:15:03.679 EOF 00:15:03.679 )") 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=90529 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.679 { 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme$subsystem", 00:15:03.679 "trtype": "$TEST_TRANSPORT", 00:15:03.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "$NVMF_PORT", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.679 "hdgst": ${hdgst:-false}, 00:15:03.679 "ddgst": ${ddgst:-false} 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 } 00:15:03.679 EOF 00:15:03.679 )") 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme1", 00:15:03.679 "trtype": "tcp", 00:15:03.679 "traddr": "10.0.0.2", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "4420", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.679 "hdgst": false, 00:15:03.679 "ddgst": false 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 }' 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme1", 00:15:03.679 "trtype": "tcp", 00:15:03.679 "traddr": "10.0.0.2", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "4420", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.679 "hdgst": false, 00:15:03.679 "ddgst": false 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 }' 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme1", 00:15:03.679 "trtype": "tcp", 00:15:03.679 "traddr": "10.0.0.2", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "4420", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.679 "hdgst": false, 00:15:03.679 "ddgst": false 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 }' 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.679 { 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme$subsystem", 00:15:03.679 "trtype": "$TEST_TRANSPORT", 00:15:03.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "$NVMF_PORT", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.679 "hdgst": ${hdgst:-false}, 00:15:03.679 "ddgst": ${ddgst:-false} 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 } 00:15:03.679 EOF 00:15:03.679 )") 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.679 "params": { 00:15:03.679 "name": "Nvme1", 00:15:03.679 "trtype": "tcp", 00:15:03.679 "traddr": "10.0.0.2", 00:15:03.679 "adrfam": "ipv4", 00:15:03.679 "trsvcid": "4420", 00:15:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.679 "hdgst": false, 00:15:03.679 "ddgst": false 00:15:03.679 }, 00:15:03.679 "method": "bdev_nvme_attach_controller" 00:15:03.679 }' 00:15:03.679 [2024-07-14 20:14:52.702785] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:15:03.679 [2024-07-14 20:14:52.702899] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:03.679 [2024-07-14 20:14:52.718616] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:03.679 [2024-07-14 20:14:52.718935] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:03.679 20:14:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 90522 00:15:03.679 [2024-07-14 20:14:52.730915] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:03.679 [2024-07-14 20:14:52.730993] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:03.679 [2024-07-14 20:14:52.734281] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:03.679 [2024-07-14 20:14:52.734360] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:03.938 [2024-07-14 20:14:52.912144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.938 [2024-07-14 20:14:52.988746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.938 [2024-07-14 20:14:52.995897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:04.196 [2024-07-14 20:14:53.092124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:04.196 [2024-07-14 20:14:53.100623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.196 [2024-07-14 20:14:53.183922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:04.196 [2024-07-14 20:14:53.186936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.196 Running I/O for 1 seconds... 00:15:04.196 Running I/O for 1 seconds... 00:15:04.196 [2024-07-14 20:14:53.265315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:04.454 Running I/O for 1 seconds... 00:15:04.454 Running I/O for 1 seconds... 
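The four bdevperf instances launched above (write, read, flush, unmap on cores 0x10/0x20/0x40/0x80) each read their bdev configuration from /dev/fd/63, which gen_nvmf_target_json feeds with the controller entry printed in the trace. A minimal sketch of the resolved configuration, assuming the helper wraps that entry in the usual SPDK "subsystems"/"bdev" JSON layout (the outer wrapper itself is not visible in this trace, only the per-controller object is):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

The same attachment can also be performed against an already running bdevperf over its RPC socket, as the queue_depth test does later in this log with rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1.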
00:15:05.386 00:15:05.386 Latency(us) 00:15:05.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.386 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:05.386 Nvme1n1 : 1.00 204866.35 800.26 0.00 0.00 622.31 290.44 770.79 00:15:05.386 =================================================================================================================== 00:15:05.386 Total : 204866.35 800.26 0.00 0.00 622.31 290.44 770.79 00:15:05.386 00:15:05.386 Latency(us) 00:15:05.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.386 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:05.386 Nvme1n1 : 1.01 9622.45 37.59 0.00 0.00 13236.90 9115.46 27644.28 00:15:05.386 =================================================================================================================== 00:15:05.386 Total : 9622.45 37.59 0.00 0.00 13236.90 9115.46 27644.28 00:15:05.386 00:15:05.386 Latency(us) 00:15:05.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.386 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:05.386 Nvme1n1 : 1.01 8043.19 31.42 0.00 0.00 15840.81 6196.13 23116.33 00:15:05.386 =================================================================================================================== 00:15:05.386 Total : 8043.19 31.42 0.00 0.00 15840.81 6196.13 23116.33 00:15:05.386 00:15:05.386 Latency(us) 00:15:05.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.386 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:05.386 Nvme1n1 : 1.01 8151.52 31.84 0.00 0.00 15634.46 7685.59 26810.18 00:15:05.386 =================================================================================================================== 00:15:05.386 Total : 8151.52 31.84 0.00 0.00 15634.46 7685.59 26810.18 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 90524 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 90526 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 90529 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.951 20:14:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.951 rmmod nvme_tcp 00:15:05.951 rmmod nvme_fabrics 00:15:05.951 rmmod nvme_keyring 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 90469 ']' 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 90469 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 90469 ']' 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 90469 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90469 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:06.208 killing process with pid 90469 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90469' 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 90469 00:15:06.208 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 90469 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:06.465 00:15:06.465 real 0m4.481s 00:15:06.465 user 0m19.508s 00:15:06.465 sys 0m2.365s 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:06.465 20:14:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:06.465 ************************************ 00:15:06.465 END TEST nvmf_bdev_io_wait 00:15:06.465 ************************************ 00:15:06.465 20:14:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:06.465 20:14:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:06.465 20:14:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:06.465 20:14:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:06.465 ************************************ 00:15:06.465 START TEST nvmf_queue_depth 00:15:06.465 ************************************ 00:15:06.465 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:06.723 * Looking for test storage... 00:15:06.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:06.723 Cannot find device "nvmf_tgt_br" 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.723 Cannot find device "nvmf_tgt_br2" 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:06.723 Cannot find device "nvmf_tgt_br" 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:06.723 Cannot find device "nvmf_tgt_br2" 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:06.723 20:14:55 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.723 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:15:06.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:06.982 00:15:06.982 --- 10.0.0.2 ping statistics --- 00:15:06.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.982 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:06.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:06.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:15:06.982 00:15:06.982 --- 10.0.0.3 ping statistics --- 00:15:06.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.982 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:06.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:06.982 00:15:06.982 --- 10.0.0.1 ping statistics --- 00:15:06.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.982 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=90765 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 90765 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 90765 ']' 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
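Condensed, the nvmf_veth_init sequence traced above builds the topology the three pings just verified: the initiator side stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and the peer ends of the veth pairs are joined by the nvmf_br bridge. A minimal sketch of the same setup, using essentially the commands shown in the trace (grouped slightly for readability, run as root):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, two for the target-side interfaces
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and assign 10.0.0.0/24 addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the peer ends in the root namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The nvmf_tgt process started right after this is wrapped in "ip netns exec nvmf_tgt_ns_spdk", so its TCP listener on 10.0.0.2:4420 is reachable from the root-namespace initiator only through this veth/bridge path.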
00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:06.982 20:14:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:06.982 [2024-07-14 20:14:56.035584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:06.982 [2024-07-14 20:14:56.035676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.240 [2024-07-14 20:14:56.176776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.240 [2024-07-14 20:14:56.319603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.240 [2024-07-14 20:14:56.319670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.240 [2024-07-14 20:14:56.319697] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.240 [2024-07-14 20:14:56.319705] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.240 [2024-07-14 20:14:56.319712] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.240 [2024-07-14 20:14:56.319747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.176 20:14:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:08.176 20:14:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:08.176 20:14:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.176 20:14:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.176 20:14:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 [2024-07-14 20:14:57.054118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 Malloc0 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.176 20:14:57 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 [2024-07-14 20:14:57.130380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=90822 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 90822 /var/tmp/bdevperf.sock 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 90822 ']' 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:08.176 20:14:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 [2024-07-14 20:14:57.194159] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:15:08.176 [2024-07-14 20:14:57.194786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90822 ] 00:15:08.434 [2024-07-14 20:14:57.336345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.434 [2024-07-14 20:14:57.479218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.369 20:14:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:09.369 20:14:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:09.369 20:14:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:09.369 20:14:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.369 20:14:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.369 NVMe0n1 00:15:09.369 20:14:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.369 20:14:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.369 Running I/O for 10 seconds... 00:15:21.570 00:15:21.570 Latency(us) 00:15:21.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.570 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:21.570 Verification LBA range: start 0x0 length 0x4000 00:15:21.570 NVMe0n1 : 10.06 9454.24 36.93 0.00 0.00 107892.43 16324.42 108670.60 00:15:21.570 =================================================================================================================== 00:15:21.570 Total : 9454.24 36.93 0.00 0.00 107892.43 16324.42 108670.60 00:15:21.570 0 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 90822 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 90822 ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 90822 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90822 00:15:21.571 killing process with pid 90822 00:15:21.571 Received shutdown signal, test time was about 10.000000 seconds 00:15:21.571 00:15:21.571 Latency(us) 00:15:21.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.571 =================================================================================================================== 00:15:21.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90822' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 90822 00:15:21.571 20:15:08 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 90822 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.571 rmmod nvme_tcp 00:15:21.571 rmmod nvme_fabrics 00:15:21.571 rmmod nvme_keyring 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 90765 ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 90765 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 90765 ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 90765 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90765 00:15:21.571 killing process with pid 90765 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90765' 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 90765 00:15:21.571 20:15:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 90765 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:21.571 00:15:21.571 real 0m13.813s 00:15:21.571 user 0m23.447s 00:15:21.571 sys 0m2.308s 00:15:21.571 ************************************ 00:15:21.571 END 
TEST nvmf_queue_depth 00:15:21.571 ************************************ 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:21.571 20:15:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 20:15:09 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:21.571 20:15:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:21.571 20:15:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:21.571 20:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:21.571 ************************************ 00:15:21.571 START TEST nvmf_target_multipath 00:15:21.571 ************************************ 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:21.571 * Looking for test storage... 00:15:21.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.571 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 
00:15:21.572 Cannot find device "nvmf_tgt_br" 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:21.572 Cannot find device "nvmf_tgt_br2" 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:21.572 Cannot find device "nvmf_tgt_br" 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:21.572 Cannot find device "nvmf_tgt_br2" 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:21.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:21.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:21.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:21.572 00:15:21.572 --- 10.0.0.2 ping statistics --- 00:15:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.572 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:21.572 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:21.572 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:21.572 00:15:21.572 --- 10.0.0.3 ping statistics --- 00:15:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.572 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:21.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:21.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:21.572 00:15:21.572 --- 10.0.0.1 ping statistics --- 00:15:21.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.572 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=91153 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 91153 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 91153 ']' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:21.572 20:15:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 [2024-07-14 20:15:09.912720] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
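The commands traced above (nvmf_veth_init) build the virtual test network before the target starts: one veth pair for the initiator side, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace as the two target ports (10.0.0.2 and 10.0.0.3), a bridge tying the host-side ends together, iptables rules opening TCP/4420, and the pings above confirming reachability in both directions. A condensed sketch of that topology, using only interface names, addresses and rules that appear in the trace (cleanup and error handling omitted):

    # initiator-side veth pair stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # target ends of the two target pairs live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends so 10.0.0.1 can reach both target ports
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # initiator -> target ports
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator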
00:15:21.572 [2024-07-14 20:15:09.912817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.572 [2024-07-14 20:15:10.050389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.572 [2024-07-14 20:15:10.145450] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.572 [2024-07-14 20:15:10.145792] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.572 [2024-07-14 20:15:10.146039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.572 [2024-07-14 20:15:10.146051] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.572 [2024-07-14 20:15:10.146058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.572 [2024-07-14 20:15:10.146156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.572 [2024-07-14 20:15:10.146388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.572 [2024-07-14 20:15:10.146991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.572 [2024-07-14 20:15:10.147007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.831 20:15:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:21.831 20:15:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:15:21.831 20:15:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.831 20:15:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.831 20:15:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:21.831 20:15:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.831 20:15:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:22.090 [2024-07-14 20:15:11.133885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.349 20:15:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:22.607 Malloc0 00:15:22.607 20:15:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:22.866 20:15:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.866 20:15:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.449 [2024-07-14 20:15:12.211509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.449 20:15:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:15:23.449 [2024-07-14 20:15:12.435823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:23.449 20:15:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:23.723 20:15:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:23.982 20:15:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.982 20:15:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:15:23.982 20:15:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.982 20:15:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:23.982 20:15:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 
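At this point the multipath fabric is assembled: the target exposes one ANA-reporting subsystem with a Malloc namespace behind two TCP listeners, and the host connects to both addresses so the kernel merges them into a single multipath subsystem with two controller paths (nvme0c0n1 and nvme0c1n1). A minimal sketch of the same sequence, using only the RPCs and nvme-cli options shown in the trace (rpc_py stands for scripts/rpc.py; the --hostnqn/--hostid arguments are abbreviated here):

    # target side (inside nvmf_tgt_ns_spdk)
    rpc_py nvmf_create_transport -t tcp -o -u 8192
    rpc_py bdev_malloc_create 64 512 -b Malloc0
    rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -r                  # -r: enable ANA reporting
    rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # host side: one connect per listener; both land in the same NVMe subsystem
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

    # the per-path ANA state is read back the same way check_ana_state does
    cat /sys/block/nvme0c0n1/ana_state   # expected: optimized
    cat /sys/block/nvme0c1n1/ana_state   # expected: optimized

The fio runs that follow flip these listeners between inaccessible, non_optimized and optimized with nvmf_subsystem_listener_set_ana_state and verify that I/O keeps completing while the kernel fails over between the two paths.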
00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=91292 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:15:25.881 20:15:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:25.881 [global] 00:15:25.881 thread=1 00:15:25.881 invalidate=1 00:15:25.881 rw=randrw 00:15:25.881 time_based=1 00:15:25.881 runtime=6 00:15:25.881 ioengine=libaio 00:15:25.881 direct=1 00:15:25.881 bs=4096 00:15:25.881 iodepth=128 00:15:25.881 norandommap=0 00:15:25.881 numjobs=1 00:15:25.881 00:15:25.881 verify_dump=1 00:15:25.881 verify_backlog=512 00:15:25.881 verify_state_save=0 00:15:25.881 do_verify=1 00:15:25.881 verify=crc32c-intel 00:15:25.881 [job0] 00:15:25.881 filename=/dev/nvme0n1 00:15:26.139 Could not set queue depth (nvme0n1) 00:15:26.139 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:26.139 fio-3.35 00:15:26.139 Starting 1 thread 00:15:27.075 20:15:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:27.333 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # 
local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:27.590 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:27.591 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:27.591 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:27.591 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:27.591 20:15:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:28.524 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:28.524 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:28.524 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:28.524 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:28.781 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:29.039 20:15:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:29.971 20:15:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:29.971 20:15:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.971 20:15:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:29.971 20:15:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 91292 00:15:32.501 00:15:32.501 job0: (groupid=0, jobs=1): err= 0: pid=91313: Sun Jul 14 20:15:21 2024 00:15:32.501 read: IOPS=10.8k, BW=42.1MiB/s (44.1MB/s)(253MiB/6006msec) 00:15:32.501 slat (usec): min=2, max=5498, avg=53.18, stdev=240.03 00:15:32.501 clat (usec): min=679, max=15110, avg=8095.08, stdev=1300.95 00:15:32.501 lat (usec): min=866, max=15119, avg=8148.26, stdev=1310.82 00:15:32.501 clat percentiles (usec): 00:15:32.501 | 1.00th=[ 4948], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 7177], 00:15:32.501 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:15:32.501 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[10421], 00:15:32.501 | 99.00th=[12125], 99.50th=[12649], 99.90th=[13304], 99.95th=[13566], 00:15:32.501 | 99.99th=[14484] 00:15:32.501 bw ( KiB/s): min= 9608, max=29392, per=52.31%, avg=22535.27, stdev=5653.31, samples=11 00:15:32.501 iops : min= 2402, max= 7348, avg=5633.82, stdev=1413.33, samples=11 00:15:32.501 write: IOPS=6255, BW=24.4MiB/s (25.6MB/s)(133MiB/5429msec); 0 zone resets 00:15:32.501 slat (usec): min=4, max=5873, avg=65.68, stdev=171.07 00:15:32.501 clat (usec): min=843, max=13697, avg=7012.66, stdev=1117.42 00:15:32.501 lat (usec): min=895, max=13728, avg=7078.35, stdev=1121.82 00:15:32.501 clat percentiles (usec): 00:15:32.501 | 1.00th=[ 3949], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6325], 00:15:32.501 | 30.00th=[ 6587], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7242], 00:15:32.501 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8094], 95.00th=[ 8586], 00:15:32.501 | 99.00th=[10683], 99.50th=[11338], 99.90th=[12649], 99.95th=[12911], 00:15:32.501 | 99.99th=[13304] 00:15:32.501 bw ( KiB/s): min= 9776, max=28880, per=90.16%, avg=22560.00, stdev=5306.44, samples=11 00:15:32.501 iops : min= 2444, max= 7220, avg=5640.00, stdev=1326.61, samples=11 00:15:32.501 lat (usec) : 750=0.01%, 1000=0.01% 00:15:32.501 lat (msec) : 2=0.01%, 4=0.50%, 10=94.08%, 20=5.41% 00:15:32.501 cpu : usr=5.58%, sys=22.23%, ctx=6191, majf=0, minf=108 00:15:32.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:32.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.501 issued rwts: total=64689,33962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.501 00:15:32.501 Run status group 0 (all jobs): 00:15:32.501 READ: bw=42.1MiB/s (44.1MB/s), 42.1MiB/s-42.1MiB/s (44.1MB/s-44.1MB/s), io=253MiB (265MB), run=6006-6006msec 00:15:32.501 WRITE: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=133MiB (139MB), run=5429-5429msec 00:15:32.501 00:15:32.501 Disk stats (read/write): 00:15:32.501 nvme0n1: ios=63715/33318, merge=0/0, 
ticks=482934/218112, in_queue=701046, util=98.66% 00:15:32.501 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:32.501 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:33.067 20:15:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:34.002 20:15:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:34.002 20:15:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.002 20:15:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:34.002 20:15:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:15:34.002 20:15:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=91445 00:15:34.002 20:15:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:34.002 20:15:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:15:34.002 [global] 00:15:34.002 thread=1 00:15:34.002 invalidate=1 00:15:34.002 rw=randrw 00:15:34.002 time_based=1 00:15:34.002 runtime=6 00:15:34.002 ioengine=libaio 00:15:34.002 direct=1 00:15:34.002 bs=4096 00:15:34.002 iodepth=128 00:15:34.002 norandommap=0 00:15:34.002 numjobs=1 00:15:34.002 00:15:34.002 verify_dump=1 00:15:34.002 verify_backlog=512 00:15:34.002 verify_state_save=0 00:15:34.002 do_verify=1 00:15:34.002 verify=crc32c-intel 00:15:34.002 [job0] 00:15:34.002 filename=/dev/nvme0n1 00:15:34.002 Could not set queue depth (nvme0n1) 00:15:34.002 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:34.002 fio-3.35 00:15:34.002 Starting 1 thread 00:15:34.936 20:15:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:35.194 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:35.452 20:15:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:36.386 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:36.386 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:36.386 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:36.386 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:36.645 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:36.903 20:15:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:38.277 20:15:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:38.277 20:15:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.277 20:15:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:38.277 20:15:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 91445 00:15:40.178 00:15:40.178 job0: (groupid=0, jobs=1): err= 0: pid=91466: Sun Jul 14 20:15:29 2024 00:15:40.178 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(271MiB/6004msec) 00:15:40.178 slat (usec): min=6, max=7737, avg=42.22, stdev=205.91 00:15:40.178 clat (usec): min=341, max=18905, avg=7560.41, stdev=1865.17 00:15:40.178 lat (usec): min=358, max=18930, avg=7602.63, stdev=1877.02 00:15:40.178 clat percentiles (usec): 00:15:40.178 | 1.00th=[ 2933], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 6194], 00:15:40.178 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7898], 00:15:40.178 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10683], 00:15:40.179 | 99.00th=[12911], 99.50th=[14091], 99.90th=[16909], 99.95th=[17695], 00:15:40.179 | 99.99th=[18482] 00:15:40.179 bw ( KiB/s): min=14680, max=35736, per=55.05%, avg=25427.64, stdev=6082.61, samples=11 00:15:40.179 iops : min= 3670, max= 8934, avg=6356.91, stdev=1520.65, samples=11 00:15:40.179 write: IOPS=6785, BW=26.5MiB/s (27.8MB/s)(146MiB/5504msec); 0 zone resets 00:15:40.179 slat (usec): min=11, max=2792, avg=54.81, stdev=142.05 00:15:40.179 clat (usec): min=779, max=16293, avg=6372.44, stdev=1563.79 00:15:40.179 lat (usec): min=816, max=16360, avg=6427.25, stdev=1574.29 00:15:40.179 clat percentiles (usec): 00:15:40.179 | 1.00th=[ 2704], 5.00th=[ 3589], 10.00th=[ 4047], 20.00th=[ 4948], 00:15:40.179 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6915], 00:15:40.179 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7898], 95.00th=[ 8455], 00:15:40.179 | 99.00th=[10552], 99.50th=[11207], 99.90th=[13042], 99.95th=[14615], 00:15:40.179 | 99.99th=[15795] 00:15:40.179 bw ( KiB/s): min=14968, max=36600, per=93.36%, avg=25337.45, stdev=6017.15, samples=11 00:15:40.179 iops : min= 3742, max= 9150, avg=6334.36, stdev=1504.29, samples=11 00:15:40.179 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:15:40.179 lat (msec) : 2=0.21%, 4=4.58%, 10=89.62%, 20=5.54% 00:15:40.179 cpu : usr=6.20%, sys=22.67%, ctx=6743, majf=0, minf=151 00:15:40.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:40.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:40.179 issued rwts: total=69330,37345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.179 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:40.179 00:15:40.179 Run status group 0 (all jobs): 00:15:40.179 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=271MiB (284MB), run=6004-6004msec 00:15:40.179 WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=146MiB (153MB), run=5504-5504msec 00:15:40.179 00:15:40.179 Disk stats (read/write): 00:15:40.179 nvme0n1: ios=67689/37345, merge=0/0, ticks=480427/221841, in_queue=702268, util=98.68% 00:15:40.179 20:15:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:40.179 20:15:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.179 20:15:29 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1215 -- # local i=0 00:15:40.179 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:40.179 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.437 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:40.437 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.437 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:15:40.437 20:15:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:40.696 rmmod nvme_tcp 00:15:40.696 rmmod nvme_fabrics 00:15:40.696 rmmod nvme_keyring 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 91153 ']' 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 91153 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 91153 ']' 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 91153 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91153 00:15:40.696 killing process with pid 91153 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91153' 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 91153 00:15:40.696 20:15:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 91153 00:15:41.264 
20:15:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:41.264 00:15:41.264 real 0m20.794s 00:15:41.264 user 1m20.846s 00:15:41.264 sys 0m6.604s 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.264 ************************************ 00:15:41.264 END TEST nvmf_target_multipath 00:15:41.264 ************************************ 00:15:41.264 20:15:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:41.264 20:15:30 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.264 20:15:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:41.264 20:15:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.264 20:15:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.264 ************************************ 00:15:41.264 START TEST nvmf_zcopy 00:15:41.264 ************************************ 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.264 * Looking for test storage... 
00:15:41.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.264 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:41.523 Cannot find device "nvmf_tgt_br" 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.523 Cannot find device "nvmf_tgt_br2" 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:41.523 Cannot find device "nvmf_tgt_br" 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:41.523 Cannot find device "nvmf_tgt_br2" 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.523 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:41.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:15:41.783 00:15:41.783 --- 10.0.0.2 ping statistics --- 00:15:41.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.783 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:41.783 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.783 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:15:41.783 00:15:41.783 --- 10.0.0.3 ping statistics --- 00:15:41.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.783 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:41.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:41.783 00:15:41.783 --- 10.0.0.1 ping statistics --- 00:15:41.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.783 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:41.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=91743 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 91743 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 91743 ']' 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:41.783 20:15:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:41.783 [2024-07-14 20:15:30.772529] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:41.783 [2024-07-14 20:15:30.772872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.050 [2024-07-14 20:15:30.914846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.050 [2024-07-14 20:15:31.037598] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.050 [2024-07-14 20:15:31.037995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
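As in the multipath case, the zcopy target is launched inside the namespace, here with a single-core mask (-m 0x2), the full tracepoint mask (-e 0xFFFF) and shared-memory id 0 (-i 0), and the harness blocks until the RPC socket comes up. Roughly, with the waitforlisten helper reduced to a bare socket poll (the real helper, per the trace, also bounds its retries at max_retries=100 and does additional checking):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait for the app's RPC socket before issuing any rpc.py calls
    while [ ! -S /var/tmp/spdk.sock ]; do
        sleep 0.1
    done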
00:15:42.050 [2024-07-14 20:15:31.038156] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.050 [2024-07-14 20:15:31.038282] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.050 [2024-07-14 20:15:31.038315] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.050 [2024-07-14 20:15:31.038441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.988 [2024-07-14 20:15:31.789711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.988 [2024-07-14 20:15:31.805810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.988 malloc0 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.988 
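Up to this point the trace amounts to a short provisioning sequence issued through rpc_cmd (the autotest helper that hands its arguments to scripts/rpc.py against the target running inside nvmf_tgt_ns_spdk): create a TCP transport with zero-copy enabled, create subsystem nqn.2016-06.io.spdk:cnode1, add data and discovery listeners on 10.0.0.2:4420, and create the malloc bdev that will back the namespace. A minimal standalone sketch of the same steps, with the flags copied verbatim from the log (the rpc.py path and my reading of the flags in the comments are assumptions, not part of the run):

rpc=scripts/rpc.py   # assumed to be run from an SPDK checkout; the test itself goes through rpc_cmd
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # --zcopy enables zero-copy, -c 0 sets in-capsule data size to 0,
                                                    # -o is, as I read it, the TCP C2H-success toggle
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # any host, fixed serial, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # data listener
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                    # discovery listener
$rpc bdev_malloc_create 32 4096 -b malloc0   # 32 MB RAM-backed bdev, 4096-byte blocks

The namespace attach (nvmf_subsystem_add_ns ... malloc0 -n 1) follows immediately below.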
20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.988 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:42.989 { 00:15:42.989 "params": { 00:15:42.989 "name": "Nvme$subsystem", 00:15:42.989 "trtype": "$TEST_TRANSPORT", 00:15:42.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:42.989 "adrfam": "ipv4", 00:15:42.989 "trsvcid": "$NVMF_PORT", 00:15:42.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:42.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:42.989 "hdgst": ${hdgst:-false}, 00:15:42.989 "ddgst": ${ddgst:-false} 00:15:42.989 }, 00:15:42.989 "method": "bdev_nvme_attach_controller" 00:15:42.989 } 00:15:42.989 EOF 00:15:42.989 )") 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:42.989 20:15:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:42.989 "params": { 00:15:42.989 "name": "Nvme1", 00:15:42.989 "trtype": "tcp", 00:15:42.989 "traddr": "10.0.0.2", 00:15:42.989 "adrfam": "ipv4", 00:15:42.989 "trsvcid": "4420", 00:15:42.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.989 "hdgst": false, 00:15:42.989 "ddgst": false 00:15:42.989 }, 00:15:42.989 "method": "bdev_nvme_attach_controller" 00:15:42.989 }' 00:15:42.989 [2024-07-14 20:15:31.912533] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:42.989 [2024-07-14 20:15:31.913425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91797 ] 00:15:42.989 [2024-07-14 20:15:32.054867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.246 [2024-07-14 20:15:32.174664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.504 Running I/O for 10 seconds... 
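The bdevperf run launched above takes its bdev configuration from a file descriptor (--json /dev/fd/62): gen_nvmf_target_json expands the heredoc once per subsystem, pipes it through jq, and the resolved bdev_nvme_attach_controller entry is the JSON printed in the log. A rough equivalent using an ordinary file instead of /dev/fd — the outer "subsystems"/"config" envelope is my assumption about how gen_nvmf_target_json wraps the fragment, the file name is made up, and only the inner params are verbatim from the log:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload flags as the run above: 10 s verify, queue depth 128, 8192-byte I/Os
build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192

Everything in the ten-second verify run whose results follow goes over NVMe/TCP to 10.0.0.2:4420; no local NVMe device is involved.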
00:15:53.495 00:15:53.495 Latency(us) 00:15:53.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.495 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:53.495 Verification LBA range: start 0x0 length 0x1000 00:15:53.495 Nvme1n1 : 10.01 6840.67 53.44 0.00 0.00 18655.15 558.55 29789.09 00:15:53.495 =================================================================================================================== 00:15:53.495 Total : 6840.67 53.44 0.00 0.00 18655.15 558.55 29789.09 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=91912 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:53.755 { 00:15:53.755 "params": { 00:15:53.755 "name": "Nvme$subsystem", 00:15:53.755 "trtype": "$TEST_TRANSPORT", 00:15:53.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:53.755 "adrfam": "ipv4", 00:15:53.755 "trsvcid": "$NVMF_PORT", 00:15:53.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:53.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:53.755 "hdgst": ${hdgst:-false}, 00:15:53.755 "ddgst": ${ddgst:-false} 00:15:53.755 }, 00:15:53.755 "method": "bdev_nvme_attach_controller" 00:15:53.755 } 00:15:53.755 EOF 00:15:53.755 )") 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:53.755 [2024-07-14 20:15:42.717829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.717908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
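Two quick consistency checks on the bdevperf summary above, using only the numbers it printed (throwaway awk; 1048576 is bytes per MiB):

awk 'BEGIN { printf "%.2f MiB/s\n", 6840.67 * 8192 / 1048576 }'   # 53.44, matching the MiB/s column at 8192-byte I/Os
awk 'BEGIN { printf "%.0f IOPS\n", 128 / 18655.15e-6 }'           # ~6861: queue depth / average latency (Little's law),
                                                                  #        within about 0.3% of the measured 6840.67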
00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:53.755 20:15:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:53.755 "params": { 00:15:53.755 "name": "Nvme1", 00:15:53.755 "trtype": "tcp", 00:15:53.755 "traddr": "10.0.0.2", 00:15:53.755 "adrfam": "ipv4", 00:15:53.755 "trsvcid": "4420", 00:15:53.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.755 "hdgst": false, 00:15:53.755 "ddgst": false 00:15:53.755 }, 00:15:53.755 "method": "bdev_nvme_attach_controller" 00:15:53.755 }' 00:15:53.755 [2024-07-14 20:15:42.729770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.729797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.741761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.741786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.753760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.753785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.765766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.765792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.772287] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:15:53.755 [2024-07-14 20:15:42.772376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91912 ] 00:15:53.755 [2024-07-14 20:15:42.777770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.777796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.789768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.789792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.801771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.801794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.813773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.813799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.825775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.825799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.755 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.755 [2024-07-14 20:15:42.837793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.755 [2024-07-14 20:15:42.837818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 
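From here to the end of the run the log is the same exchange repeated with only the timestamps changing: the host side keeps calling nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that namespace is already attached, spdk_nvmf_subsystem_add_ns_ext rejects it ("Requested NSID 1 already in use"), nvmf_rpc_ns_paused then reports "Unable to add namespace", and the caller sees JSON-RPC error -32602, the standard invalid-params code. The loop that drives this presumably lives in target/zcopy.sh and is not shown in this excerpt, so the line below is only a hedged reconstruction of a single iteration; the `|| true` is mine, reflecting that the run clearly tolerates each failure and carries on in parallel with the second bdevperf job (the 5-second 50/50 randrw workload started above):

# hypothetical single iteration of the repeated exchange; fails with -32602 while NSID 1 is still attached
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true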
[2024-07-14 20:15:42.849807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.849836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.861780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.861805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.873780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.873805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.885786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.885811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.897790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.897814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.909791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.909816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 [2024-07-14 20:15:42.912298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.921803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.921831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.933801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.933827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.945802] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.945827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.957800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.957824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.015 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.015 [2024-07-14 20:15:42.969804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.015 [2024-07-14 20:15:42.969829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:42.981816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:42.981845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:42.993813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:42.993838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.005817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:43.005842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.017818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:43.017843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.024116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.016 [2024-07-14 20:15:43.029829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:43.029881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.041835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:43.041888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.053851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:43.053903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.065860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:43.065905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.077855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 
20:15:43.077913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.016 [2024-07-14 20:15:43.089863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.016 [2024-07-14 20:15:43.089905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.016 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.277 [2024-07-14 20:15:43.101881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.101913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.113899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.113938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.125921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.125953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.137908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.137942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.149910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.149936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.161911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.161942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.173908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.173940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.185904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.185935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.197898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.197927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.209909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.209952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.221906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.221934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.233920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.233950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 Running I/O for 5 seconds... 00:15:54.278 [2024-07-14 20:15:43.245909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.245933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.263560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.263593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.280538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.280570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.278 [2024-07-14 20:15:43.296351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.278 [2024-07-14 20:15:43.296383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.278 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.279 [2024-07-14 20:15:43.308439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.279 [2024-07-14 20:15:43.308471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.279 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.279 [2024-07-14 20:15:43.324460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.279 [2024-07-14 20:15:43.324493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.279 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.279 [2024-07-14 20:15:43.340242] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.279 [2024-07-14 20:15:43.340291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.279 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.279 [2024-07-14 20:15:43.357583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.279 [2024-07-14 20:15:43.357616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.372668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.372701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.388396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.388428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.405266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.405298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.422157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.422191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.438506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.438538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.454149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.454181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.466417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.466450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.481702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.481736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.491378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.491409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.506842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.506885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.521977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.522009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:54.539 [2024-07-14 20:15:43.532176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.532208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.546669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.546702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.564460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.564493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.580135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.580167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.591479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.591510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.539 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.539 [2024-07-14 20:15:43.608417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.539 [2024-07-14 20:15:43.608451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.540 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.624308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.624342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.640029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.640060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.657288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.657321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.674171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.674203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.690015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.690049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.702170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.702202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.717473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.717506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.732798] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.732830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.750098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.750131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.766078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.766111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.782819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.782852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.798631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.798665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.815790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.815834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.833026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.833057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.848706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.848737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.798 [2024-07-14 20:15:43.865795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.798 [2024-07-14 20:15:43.865828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.798 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.057 [2024-07-14 20:15:43.884143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.057 [2024-07-14 20:15:43.884175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.057 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.057 [2024-07-14 20:15:43.898407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.057 [2024-07-14 20:15:43.898439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.057 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.057 [2024-07-14 20:15:43.915807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.057 [2024-07-14 20:15:43.915839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.057 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.057 [2024-07-14 20:15:43.929843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:55.057 [2024-07-14 20:15:43.929900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:55.057 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:55.057 [2024-07-14 20:15:43.946961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use
00:15:55.057 [2024-07-14 20:15:43.946994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:55.057 2024/07/14 20:15:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:56.873 [2024-07-14 20:15:45.906133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:56.873 [2024-07-14 20:15:45.906163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:56.873 2024/07/14 20:15:45 error on
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.873 [2024-07-14 20:15:45.921898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.873 [2024-07-14 20:15:45.921941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.873 2024/07/14 20:15:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.873 [2024-07-14 20:15:45.939273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.873 [2024-07-14 20:15:45.939318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.873 2024/07/14 20:15:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:56.873 [2024-07-14 20:15:45.956341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.873 [2024-07-14 20:15:45.956384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:45.971418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:45.971463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:45.987425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:45.987468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.004101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.004144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.021405] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.021448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.038671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.038714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.054727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.054772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.071525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.071568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.089197] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.089241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.104157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.104201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.112973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.112999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.127192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.127236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.143544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.143575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.159211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.159241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.176262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.176306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.132 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.132 [2024-07-14 20:15:46.192583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.132 [2024-07-14 20:15:46.192611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.133 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.133 [2024-07-14 20:15:46.209232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.133 [2024-07-14 20:15:46.209275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.133 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.393 [2024-07-14 20:15:46.225454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:57.393 [2024-07-14 20:15:46.225497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.393 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.393 [2024-07-14 20:15:46.237630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.393 [2024-07-14 20:15:46.237660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.393 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.393 [2024-07-14 20:15:46.253681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.393 [2024-07-14 20:15:46.253724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.393 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.393 [2024-07-14 20:15:46.270865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.393 [2024-07-14 20:15:46.270954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.393 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.393 [2024-07-14 20:15:46.286536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.393 [2024-07-14 20:15:46.286580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.393 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.393 [2024-07-14 20:15:46.303028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.393 [2024-07-14 20:15:46.303057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.320317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.320360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.335895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.335929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.352764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.352807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.370143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.370186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.385445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.385490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.402981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.403009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.419281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.419327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.435859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:57.394 [2024-07-14 20:15:46.435912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.452753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.452796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.394 [2024-07-14 20:15:46.469507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.394 [2024-07-14 20:15:46.469550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.394 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.484808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.484850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.501723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.501765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.518178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.518220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.535610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.535653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.551066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.551096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.568067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.568094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.584707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.584751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.602285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.602329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.618195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.618238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.635031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.635060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.652153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.652196] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.667493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.667535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.678989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.679017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.695525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.695568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.709771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.709815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.653 [2024-07-14 20:15:46.725951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.653 [2024-07-14 20:15:46.725993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.653 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.742459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.742502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.758982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.759011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.776488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.776531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.792256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.792300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.809648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.809693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.825546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.825589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.842454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.842480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.860171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.860215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.875958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.875985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.893108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.893151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.909913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.909957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.926148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.926191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.912 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.912 [2024-07-14 20:15:46.943333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.912 [2024-07-14 20:15:46.943369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.913 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.913 [2024-07-14 20:15:46.960813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.913 [2024-07-14 20:15:46.960857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.913 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:57.913 [2024-07-14 20:15:46.975722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.913 [2024-07-14 20:15:46.975766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.913 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.913 [2024-07-14 20:15:46.991995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.913 [2024-07-14 20:15:46.992024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.913 2024/07/14 20:15:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.005999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.006027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.021925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.021968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.039672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.039716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.055394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.055438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.072166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.072196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.087847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.087900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.105459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.105502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.121557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.121602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.138944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.138974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.154276] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.154319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.165581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.165624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.181814] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.181857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.198357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.198400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.215934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.215963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.230617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.230660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.172 [2024-07-14 20:15:47.241455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.172 [2024-07-14 20:15:47.241498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.172 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.257923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.257946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.274809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.274853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.291359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.291402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.308844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.308897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.323529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.323573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.340226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.340269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.355326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.355386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.371025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.371054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.388236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.388280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.402959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.402987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.418519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.418563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.435894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.435947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.451178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.451207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.466620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.466663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.431 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.431 [2024-07-14 20:15:47.483999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.431 [2024-07-14 20:15:47.484041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.432 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.432 [2024-07-14 20:15:47.501725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.432 [2024-07-14 20:15:47.501769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.432 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.690 [2024-07-14 20:15:47.518344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.690 [2024-07-14 20:15:47.518372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.690 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.690 [2024-07-14 20:15:47.534985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.690 [2024-07-14 20:15:47.535017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.690 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.690 [2024-07-14 20:15:47.552592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.690 [2024-07-14 20:15:47.552636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.690 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.690 [2024-07-14 20:15:47.566391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.690 [2024-07-14 20:15:47.566435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.690 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.583007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.583035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.599002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:58.691 [2024-07-14 20:15:47.599030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.616797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.616842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.632455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.632498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.650693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.650737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.666110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.666153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.677049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.677079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.693512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.693555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.711424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.711468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.725945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.725970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.741374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.741419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.758679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.758723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.691 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.691 [2024-07-14 20:15:47.774824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.691 [2024-07-14 20:15:47.774864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.949 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.949 [2024-07-14 20:15:47.791182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.949 [2024-07-14 20:15:47.791213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.949 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.949 [2024-07-14 20:15:47.808053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.949 [2024-07-14 20:15:47.808096] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.949 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.949 [2024-07-14 20:15:47.823880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.949 [2024-07-14 20:15:47.823936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.949 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.949 [2024-07-14 20:15:47.841023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.949 [2024-07-14 20:15:47.841081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.949 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.949 [2024-07-14 20:15:47.857726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.949 [2024-07-14 20:15:47.857769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.949 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.949 [2024-07-14 20:15:47.873855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.949 [2024-07-14 20:15:47.873908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.949 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.949 [2024-07-14 20:15:47.890888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.949 [2024-07-14 20:15:47.890955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:47.907464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:47.907508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:47.924485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:47.924528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:47.942237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:47.942280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:47.957711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:47.957755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:47.974992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:47.975020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:47.991722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:47.991765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:48.009264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:48.009308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.950 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.950 [2024-07-14 20:15:48.024748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.950 [2024-07-14 20:15:48.024790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:58.950 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.041707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.041750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.057527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.057571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.074809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.074853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.092304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.092347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.109304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.109347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.125781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.125824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:59.209 [2024-07-14 20:15:48.142239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.142282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.160027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.160057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.175179] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.175222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.192900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.192943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.208948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.208991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.226586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.226611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.241578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.241606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 00:15:59.209 Latency(us) 00:15:59.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.209 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:59.209 Nvme1n1 : 5.01 13098.81 102.33 0.00 0.00 9760.83 4200.26 19899.11 00:15:59.209 =================================================================================================================== 00:15:59.209 Total : 13098.81 102.33 0.00 0.00 9760.83 4200.26 19899.11 00:15:59.209 [2024-07-14 20:15:48.252708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.252749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.264710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.264750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.276710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.276751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.209 [2024-07-14 20:15:48.288724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.209 [2024-07-14 20:15:48.288767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.209 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.300719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.300764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.312724] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.312769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.324730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.324759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.336734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.336777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.348738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.348783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.360745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.360777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.372745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.372789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.384750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.468 [2024-07-14 20:15:48.384781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.468 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.468 [2024-07-14 20:15:48.396750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.396793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.408752] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.408781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.420751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.420794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.432757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.432787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.444754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.444793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.456747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.456768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.468776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.468824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.480769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.480816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.492761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.492784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.504760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.504781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.516798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.516848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.528770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.528810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 [2024-07-14 20:15:48.540764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.469 [2024-07-14 20:15:48.540783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.469 2024/07/14 20:15:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:59.469 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (91912) - No such process 00:15:59.469 20:15:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 91912 00:15:59.469 20:15:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.469 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.469 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:59.727 delay0 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.727 20:15:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:59.727 [2024-07-14 20:15:48.730773] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:07.841 Initializing NVMe Controllers 00:16:07.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:07.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:07.841 Initialization complete. Launching workers. 
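Note: the trace above tears down the background add_ns loop (pid 91912), swaps the malloc-backed namespace for a delay bdev, and then runs the abort example against the target. A minimal consolidated sketch of that same sequence, assuming scripts/rpc.py is used in place of the test helper rpc_cmd and the target listens on the default /var/tmp/spdk.sock (flags copied from the trace):
  # replace the malloc namespace with a high-latency delay bdev so aborts have work to do
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive randrw I/O at the slow namespace and submit aborts, as in the trace above
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'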
00:16:07.841 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 271, failed: 20315 00:16:07.841 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20497, failed to submit 89 00:16:07.841 success 20386, unsuccess 111, failed 0 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.841 rmmod nvme_tcp 00:16:07.841 rmmod nvme_fabrics 00:16:07.841 rmmod nvme_keyring 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 91743 ']' 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 91743 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 91743 ']' 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 91743 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91743 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:07.841 killing process with pid 91743 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91743' 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 91743 00:16:07.841 20:15:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 91743 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:07.841 00:16:07.841 real 0m26.000s 00:16:07.841 user 0m41.100s 00:16:07.841 sys 0m7.853s 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:16:07.841 20:15:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.841 ************************************ 00:16:07.841 END TEST nvmf_zcopy 00:16:07.841 ************************************ 00:16:07.841 20:15:56 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:07.841 20:15:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:07.841 20:15:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:07.841 20:15:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.841 ************************************ 00:16:07.841 START TEST nvmf_nmic 00:16:07.841 ************************************ 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:07.841 * Looking for test storage... 00:16:07.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.841 20:15:56 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:07.842 Cannot find device "nvmf_tgt_br" 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.842 Cannot find device "nvmf_tgt_br2" 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:07.842 Cannot find device "nvmf_tgt_br" 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:07.842 Cannot find device "nvmf_tgt_br2" 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:07.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:07.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:07.842 00:16:07.842 --- 10.0.0.2 ping statistics --- 00:16:07.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.842 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:07.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:07.842 00:16:07.842 --- 10.0.0.3 ping statistics --- 00:16:07.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.842 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:07.842 00:16:07.842 --- 10.0.0.1 ping statistics --- 00:16:07.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.842 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:07.842 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=92248 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 92248 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 92248 ']' 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:07.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:07.843 20:15:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:07.843 [2024-07-14 20:15:56.804600] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
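For readability, the nvmf_veth_init sequence traced above condenses to the following shell sketch. Interface names, addresses and firewall rules are taken verbatim from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is created the same way and omitted here, so treat this as an illustrative summary rather than the exact script.
# target-side interfaces live in a dedicated network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the host ends of both pairs so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# let NVMe/TCP traffic (port 4420) reach the initiator interface and cross the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The ping checks that follow in the trace (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm this topology before nvmf_tgt is started inside nvmf_tgt_ns_spdk.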
00:16:07.843 [2024-07-14 20:15:56.804695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.101 [2024-07-14 20:15:56.945762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.101 [2024-07-14 20:15:57.041426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.101 [2024-07-14 20:15:57.041709] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.101 [2024-07-14 20:15:57.041781] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.101 [2024-07-14 20:15:57.041852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.101 [2024-07-14 20:15:57.041968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.101 [2024-07-14 20:15:57.042109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.101 [2024-07-14 20:15:57.042245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.101 [2024-07-14 20:15:57.043171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.101 [2024-07-14 20:15:57.043180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.668 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:08.668 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:16:08.668 20:15:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.668 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.668 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 [2024-07-14 20:15:57.780486] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 Malloc0 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.927 20:15:57 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 [2024-07-14 20:15:57.865104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.927 test case1: single bdev can't be used in multiple subsystems 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.927 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 [2024-07-14 20:15:57.888912] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:08.927 [2024-07-14 20:15:57.889091] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:08.927 [2024-07-14 20:15:57.889182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.927 2024/07/14 20:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.927 request: 00:16:08.927 { 00:16:08.927 "method": "nvmf_subsystem_add_ns", 00:16:08.927 "params": { 00:16:08.927 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:08.927 "namespace": { 00:16:08.927 "bdev_name": "Malloc0", 00:16:08.927 "no_auto_visible": false 00:16:08.927 } 00:16:08.927 } 00:16:08.927 } 00:16:08.928 Got JSON-RPC error response 00:16:08.928 GoRPCClient: error on JSON-RPC call 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 
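The failing nvmf_subsystem_add_ns call captured above is the whole point of test case1: once Malloc0 is claimed (type exclusive_write) by the NVMe-oF target for cnode1, a second subsystem cannot add the same bdev as a namespace. A minimal reproduction with rpc.py, built only from commands already visible in the trace, would be:
# transport and backing bdev
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# cnode1 claims Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# cnode2 tries to claim the same bdev -- expected to fail with Code=-32602 Invalid parameters
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
The test records the non-zero status of that last call and treats it as the expected result, which is what the nmic_status handling in the following lines checks.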
00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:08.928 Adding namespace failed - expected result. 00:16:08.928 test case2: host connect to nvmf target in multiple paths 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:08.928 [2024-07-14 20:15:57.901031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.928 20:15:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.186 20:15:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:09.186 20:15:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.186 20:15:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:16:09.186 20:15:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.186 20:15:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:09.186 20:15:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:16:11.719 20:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:11.719 20:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:11.719 20:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.719 20:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:11.719 20:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.719 20:16:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:16:11.719 20:16:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:11.719 [global] 00:16:11.719 thread=1 00:16:11.719 invalidate=1 00:16:11.719 rw=write 00:16:11.719 time_based=1 00:16:11.719 runtime=1 00:16:11.719 ioengine=libaio 00:16:11.719 direct=1 00:16:11.719 bs=4096 00:16:11.719 iodepth=1 00:16:11.719 norandommap=0 00:16:11.719 numjobs=1 00:16:11.719 00:16:11.719 verify_dump=1 00:16:11.719 verify_backlog=512 00:16:11.719 verify_state_save=0 00:16:11.719 do_verify=1 00:16:11.719 verify=crc32c-intel 00:16:11.719 [job0] 00:16:11.719 filename=/dev/nvme0n1 00:16:11.719 Could not set queue depth (nvme0n1) 00:16:11.719 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.719 fio-3.35 00:16:11.719 Starting 1 thread 00:16:12.660 
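The job file fio-wrapper generates is printed above ([global]/[job0] with filename=/dev/nvme0n1); the serial SPDKISFASTANDAWESOME is how waitforserial picks that block device out of lsblk before fio runs. For reference, the same workload can be expressed directly on the fio command line; this is a sketch assuming the namespace enumerates as /dev/nvme0n1 as in this run, not the wrapper's exact invocation:
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512
The verify options matter here: each 4 KiB block is written with a crc32c checksum and read back, so the run below exercises the NVMe/TCP data path in both directions, not just writes.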
00:16:12.660 job0: (groupid=0, jobs=1): err= 0: pid=92352: Sun Jul 14 20:16:01 2024 00:16:12.660 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:12.660 slat (nsec): min=13598, max=87767, avg=16904.15, stdev=5712.03 00:16:12.660 clat (usec): min=128, max=259, avg=160.97, stdev=18.37 00:16:12.660 lat (usec): min=143, max=274, avg=177.88, stdev=19.46 00:16:12.660 clat percentiles (usec): 00:16:12.660 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:16:12.660 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:16:12.660 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 198], 00:16:12.661 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 253], 99.95th=[ 258], 00:16:12.661 | 99.99th=[ 260] 00:16:12.661 write: IOPS=3242, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:16:12.661 slat (usec): min=19, max=137, avg=25.23, stdev= 8.36 00:16:12.661 clat (usec): min=84, max=1207, avg=111.27, stdev=26.53 00:16:12.661 lat (usec): min=108, max=1306, avg=136.50, stdev=29.37 00:16:12.661 clat percentiles (usec): 00:16:12.661 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:16:12.661 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 111], 00:16:12.661 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 133], 95.00th=[ 147], 00:16:12.661 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 215], 99.95th=[ 457], 00:16:12.661 | 99.99th=[ 1205] 00:16:12.661 bw ( KiB/s): min=12576, max=12576, per=96.95%, avg=12576.00, stdev= 0.00, samples=1 00:16:12.661 iops : min= 3144, max= 3144, avg=3144.00, stdev= 0.00, samples=1 00:16:12.661 lat (usec) : 100=14.02%, 250=85.87%, 500=0.09% 00:16:12.661 lat (msec) : 2=0.02% 00:16:12.661 cpu : usr=1.90%, sys=9.80%, ctx=6318, majf=0, minf=2 00:16:12.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.661 issued rwts: total=3072,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.661 00:16:12.661 Run status group 0 (all jobs): 00:16:12.661 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:12.661 WRITE: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:16:12.661 00:16:12.661 Disk stats (read/write): 00:16:12.661 nvme0n1: ios=2671/3072, merge=0/0, ticks=467/392, in_queue=859, util=91.08% 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic 
-- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.661 rmmod nvme_tcp 00:16:12.661 rmmod nvme_fabrics 00:16:12.661 rmmod nvme_keyring 00:16:12.661 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 92248 ']' 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 92248 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 92248 ']' 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 92248 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92248 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:12.919 killing process with pid 92248 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92248' 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 92248 00:16:12.919 20:16:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 92248 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:13.177 00:16:13.177 real 0m5.884s 00:16:13.177 user 0m19.691s 00:16:13.177 sys 0m1.344s 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:13.177 20:16:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:13.177 ************************************ 00:16:13.177 END TEST nvmf_nmic 00:16:13.177 ************************************ 00:16:13.177 20:16:02 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:13.177 20:16:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:13.177 20:16:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:13.177 20:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.177 ************************************ 00:16:13.177 START TEST nvmf_fio_target 00:16:13.177 ************************************ 00:16:13.177 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:13.436 * Looking for test storage... 00:16:13.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.436 20:16:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:13.437 Cannot find device "nvmf_tgt_br" 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.437 Cannot find device "nvmf_tgt_br2" 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:16:13.437 Cannot find device "nvmf_tgt_br" 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:13.437 Cannot find device "nvmf_tgt_br2" 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.437 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:13.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:13.696 00:16:13.696 --- 10.0.0.2 ping statistics --- 00:16:13.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.696 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:13.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:16:13.696 00:16:13.696 --- 10.0.0.3 ping statistics --- 00:16:13.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.696 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:13.696 00:16:13.696 --- 10.0.0.1 ping statistics --- 00:16:13.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.696 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.696 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=92537 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 92537 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 92537 ']' 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.697 20:16:02 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:13.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:13.697 20:16:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.697 [2024-07-14 20:16:02.719812] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:13.697 [2024-07-14 20:16:02.719904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.956 [2024-07-14 20:16:02.853544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.956 [2024-07-14 20:16:02.973141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.956 [2024-07-14 20:16:02.973205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.956 [2024-07-14 20:16:02.973216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.956 [2024-07-14 20:16:02.973231] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.956 [2024-07-14 20:16:02.973238] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.956 [2024-07-14 20:16:02.973436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.956 [2024-07-14 20:16:02.973571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.956 [2024-07-14 20:16:02.974487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.956 [2024-07-14 20:16:02.974548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.893 20:16:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:14.893 20:16:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:14.893 20:16:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.893 20:16:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.893 20:16:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.893 20:16:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.893 20:16:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:15.151 [2024-07-14 20:16:03.983011] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.151 20:16:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.409 20:16:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:15.409 20:16:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.667 20:16:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:15.667 20:16:04 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:15.925 20:16:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:15.925 20:16:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.183 20:16:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:16.183 20:16:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:16.442 20:16:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.713 20:16:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:16.713 20:16:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:16.986 20:16:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:16.986 20:16:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:17.244 20:16:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:17.244 20:16:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:17.502 20:16:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.761 20:16:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:17.761 20:16:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.020 20:16:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:18.020 20:16:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.279 20:16:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.539 [2024-07-14 20:16:07.456483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.539 20:16:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:18.798 20:16:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:19.056 20:16:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.056 20:16:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:19.056 20:16:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:16:19.056 20:16:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:19.056 20:16:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:19.056 20:16:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:19.056 20:16:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:16:21.588 20:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:21.588 20:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:21.588 20:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.588 20:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:21.588 20:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.588 20:16:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:16:21.588 20:16:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:21.588 [global] 00:16:21.588 thread=1 00:16:21.588 invalidate=1 00:16:21.588 rw=write 00:16:21.588 time_based=1 00:16:21.588 runtime=1 00:16:21.588 ioengine=libaio 00:16:21.588 direct=1 00:16:21.588 bs=4096 00:16:21.588 iodepth=1 00:16:21.588 norandommap=0 00:16:21.588 numjobs=1 00:16:21.588 00:16:21.588 verify_dump=1 00:16:21.588 verify_backlog=512 00:16:21.588 verify_state_save=0 00:16:21.588 do_verify=1 00:16:21.588 verify=crc32c-intel 00:16:21.588 [job0] 00:16:21.588 filename=/dev/nvme0n1 00:16:21.588 [job1] 00:16:21.588 filename=/dev/nvme0n2 00:16:21.588 [job2] 00:16:21.588 filename=/dev/nvme0n3 00:16:21.588 [job3] 00:16:21.588 filename=/dev/nvme0n4 00:16:21.588 Could not set queue depth (nvme0n1) 00:16:21.588 Could not set queue depth (nvme0n2) 00:16:21.588 Could not set queue depth (nvme0n3) 00:16:21.588 Could not set queue depth (nvme0n4) 00:16:21.589 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.589 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.589 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.589 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.589 fio-3.35 00:16:21.589 Starting 4 threads 00:16:22.522 00:16:22.522 job0: (groupid=0, jobs=1): err= 0: pid=92832: Sun Jul 14 20:16:11 2024 00:16:22.522 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:22.522 slat (nsec): min=11594, max=42226, avg=15293.31, stdev=4159.30 00:16:22.522 clat (usec): min=220, max=524, avg=332.53, stdev=39.45 00:16:22.522 lat (usec): min=234, max=539, avg=347.83, stdev=39.15 00:16:22.522 clat percentiles (usec): 00:16:22.522 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:16:22.522 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:16:22.522 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 400], 00:16:22.522 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 506], 99.95th=[ 529], 00:16:22.522 | 99.99th=[ 529] 00:16:22.522 write: IOPS=1641, BW=6565KiB/s (6723kB/s)(6572KiB/1001msec); 0 zone resets 00:16:22.522 slat (nsec): min=11826, max=91029, avg=25743.41, stdev=6881.35 00:16:22.522 clat (usec): min=98, max=717, avg=254.38, stdev=41.80 00:16:22.522 lat (usec): 
min=120, max=740, avg=280.12, stdev=41.46 00:16:22.522 clat percentiles (usec): 00:16:22.522 | 1.00th=[ 155], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 225], 00:16:22.522 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:16:22.522 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 330], 00:16:22.522 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 457], 99.95th=[ 717], 00:16:22.522 | 99.99th=[ 717] 00:16:22.522 bw ( KiB/s): min= 8192, max= 8192, per=25.97%, avg=8192.00, stdev= 0.00, samples=1 00:16:22.522 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:22.522 lat (usec) : 100=0.03%, 250=26.58%, 500=73.23%, 750=0.16% 00:16:22.522 cpu : usr=1.40%, sys=4.90%, ctx=3182, majf=0, minf=11 00:16:22.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.522 issued rwts: total=1536,1643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.522 job1: (groupid=0, jobs=1): err= 0: pid=92833: Sun Jul 14 20:16:11 2024 00:16:22.522 read: IOPS=1771, BW=7085KiB/s (7255kB/s)(7092KiB/1001msec) 00:16:22.522 slat (nsec): min=12634, max=49597, avg=17214.91, stdev=4218.31 00:16:22.522 clat (usec): min=207, max=638, avg=267.75, stdev=27.18 00:16:22.523 lat (usec): min=221, max=657, avg=284.97, stdev=27.65 00:16:22.523 clat percentiles (usec): 00:16:22.523 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:16:22.523 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:16:22.523 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:16:22.523 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 594], 99.95th=[ 635], 00:16:22.523 | 99.99th=[ 635] 00:16:22.523 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:22.523 slat (usec): min=17, max=143, avg=25.92, stdev= 6.93 00:16:22.523 clat (usec): min=152, max=301, avg=211.92, stdev=23.59 00:16:22.523 lat (usec): min=171, max=401, avg=237.84, stdev=25.38 00:16:22.523 clat percentiles (usec): 00:16:22.523 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 192], 00:16:22.523 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:16:22.523 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 253], 00:16:22.523 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:16:22.523 | 99.99th=[ 302] 00:16:22.523 bw ( KiB/s): min= 8192, max= 8192, per=25.97%, avg=8192.00, stdev= 0.00, samples=1 00:16:22.523 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:22.523 lat (usec) : 250=62.31%, 500=37.63%, 750=0.05% 00:16:22.523 cpu : usr=1.60%, sys=6.20%, ctx=3821, majf=0, minf=10 00:16:22.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.523 issued rwts: total=1773,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.523 job2: (groupid=0, jobs=1): err= 0: pid=92834: Sun Jul 14 20:16:11 2024 00:16:22.523 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:22.523 slat (nsec): min=11737, max=48849, avg=15920.42, stdev=4168.60 00:16:22.523 clat (usec): min=213, max=610, 
avg=331.90, stdev=39.28 00:16:22.523 lat (usec): min=229, max=630, avg=347.83, stdev=39.60 00:16:22.523 clat percentiles (usec): 00:16:22.523 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 302], 00:16:22.523 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:16:22.523 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 400], 00:16:22.523 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 529], 99.95th=[ 611], 00:16:22.523 | 99.99th=[ 611] 00:16:22.523 write: IOPS=1640, BW=6561KiB/s (6719kB/s)(6568KiB/1001msec); 0 zone resets 00:16:22.523 slat (usec): min=12, max=173, avg=25.68, stdev= 7.80 00:16:22.523 clat (usec): min=113, max=866, avg=254.79, stdev=39.95 00:16:22.523 lat (usec): min=142, max=884, avg=280.47, stdev=39.68 00:16:22.523 clat percentiles (usec): 00:16:22.523 | 1.00th=[ 167], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 227], 00:16:22.523 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 260], 00:16:22.523 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 322], 00:16:22.523 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 433], 99.95th=[ 865], 00:16:22.523 | 99.99th=[ 865] 00:16:22.523 bw ( KiB/s): min= 8192, max= 8192, per=25.97%, avg=8192.00, stdev= 0.00, samples=1 00:16:22.523 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:22.523 lat (usec) : 250=24.70%, 500=75.17%, 750=0.09%, 1000=0.03% 00:16:22.523 cpu : usr=1.30%, sys=4.80%, ctx=3181, majf=0, minf=3 00:16:22.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.523 issued rwts: total=1536,1642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.523 job3: (groupid=0, jobs=1): err= 0: pid=92835: Sun Jul 14 20:16:11 2024 00:16:22.523 read: IOPS=2335, BW=9343KiB/s (9567kB/s)(9352KiB/1001msec) 00:16:22.523 slat (nsec): min=13779, max=62281, avg=16921.67, stdev=4094.67 00:16:22.523 clat (usec): min=148, max=1674, avg=203.41, stdev=44.11 00:16:22.523 lat (usec): min=163, max=1691, avg=220.33, stdev=44.41 00:16:22.523 clat percentiles (usec): 00:16:22.523 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:16:22.523 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 206], 00:16:22.523 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 258], 00:16:22.523 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 474], 99.95th=[ 644], 00:16:22.523 | 99.99th=[ 1680] 00:16:22.523 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:22.523 slat (nsec): min=20054, max=85959, avg=25678.90, stdev=6166.37 00:16:22.523 clat (usec): min=107, max=2446, avg=160.42, stdev=57.47 00:16:22.523 lat (usec): min=132, max=2471, avg=186.10, stdev=58.26 00:16:22.523 clat percentiles (usec): 00:16:22.523 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 126], 20.00th=[ 133], 00:16:22.523 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 155], 60.00th=[ 163], 00:16:22.523 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 212], 00:16:22.523 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 553], 99.95th=[ 1004], 00:16:22.523 | 99.99th=[ 2442] 00:16:22.523 bw ( KiB/s): min=11584, max=11584, per=36.73%, avg=11584.00, stdev= 0.00, samples=1 00:16:22.523 iops : min= 2896, max= 2896, avg=2896.00, stdev= 0.00, samples=1 00:16:22.523 lat (usec) : 250=96.20%, 500=3.67%, 750=0.06% 00:16:22.523 lat 
(msec) : 2=0.04%, 4=0.02% 00:16:22.523 cpu : usr=1.80%, sys=7.40%, ctx=4898, majf=0, minf=11 00:16:22.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.523 issued rwts: total=2338,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.523 00:16:22.523 Run status group 0 (all jobs): 00:16:22.523 READ: bw=28.0MiB/s (29.4MB/s), 6138KiB/s-9343KiB/s (6285kB/s-9567kB/s), io=28.1MiB (29.4MB), run=1001-1001msec 00:16:22.523 WRITE: bw=30.8MiB/s (32.3MB/s), 6561KiB/s-9.99MiB/s (6719kB/s-10.5MB/s), io=30.8MiB (32.3MB), run=1001-1001msec 00:16:22.523 00:16:22.523 Disk stats (read/write): 00:16:22.523 nvme0n1: ios=1274/1536, merge=0/0, ticks=429/400, in_queue=829, util=87.78% 00:16:22.523 nvme0n2: ios=1576/1757, merge=0/0, ticks=442/393, in_queue=835, util=88.45% 00:16:22.523 nvme0n3: ios=1225/1536, merge=0/0, ticks=414/407, in_queue=821, util=89.26% 00:16:22.523 nvme0n4: ios=2048/2177, merge=0/0, ticks=430/379, in_queue=809, util=89.62% 00:16:22.523 20:16:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:22.523 [global] 00:16:22.523 thread=1 00:16:22.523 invalidate=1 00:16:22.523 rw=randwrite 00:16:22.523 time_based=1 00:16:22.523 runtime=1 00:16:22.523 ioengine=libaio 00:16:22.523 direct=1 00:16:22.523 bs=4096 00:16:22.523 iodepth=1 00:16:22.523 norandommap=0 00:16:22.523 numjobs=1 00:16:22.523 00:16:22.523 verify_dump=1 00:16:22.523 verify_backlog=512 00:16:22.523 verify_state_save=0 00:16:22.524 do_verify=1 00:16:22.524 verify=crc32c-intel 00:16:22.524 [job0] 00:16:22.524 filename=/dev/nvme0n1 00:16:22.524 [job1] 00:16:22.524 filename=/dev/nvme0n2 00:16:22.524 [job2] 00:16:22.524 filename=/dev/nvme0n3 00:16:22.524 [job3] 00:16:22.524 filename=/dev/nvme0n4 00:16:22.524 Could not set queue depth (nvme0n1) 00:16:22.524 Could not set queue depth (nvme0n2) 00:16:22.524 Could not set queue depth (nvme0n3) 00:16:22.524 Could not set queue depth (nvme0n4) 00:16:22.783 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.783 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.783 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.783 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:22.783 fio-3.35 00:16:22.783 Starting 4 threads 00:16:24.160 00:16:24.160 job0: (groupid=0, jobs=1): err= 0: pid=92889: Sun Jul 14 20:16:12 2024 00:16:24.160 read: IOPS=1970, BW=7880KiB/s (8069kB/s)(7888KiB/1001msec) 00:16:24.160 slat (nsec): min=10886, max=53024, avg=16314.01, stdev=4790.11 00:16:24.160 clat (usec): min=140, max=937, avg=262.41, stdev=87.06 00:16:24.160 lat (usec): min=156, max=953, avg=278.72, stdev=86.90 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 165], 20.00th=[ 178], 00:16:24.160 | 30.00th=[ 190], 40.00th=[ 206], 50.00th=[ 249], 60.00th=[ 302], 00:16:24.160 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 396], 00:16:24.160 | 99.00th=[ 449], 99.50th=[ 498], 99.90th=[ 832], 99.95th=[ 938], 00:16:24.160 | 99.99th=[ 938] 
00:16:24.160 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:24.160 slat (usec): min=12, max=166, avg=26.36, stdev=10.24 00:16:24.160 clat (usec): min=77, max=746, avg=190.07, stdev=63.42 00:16:24.160 lat (usec): min=122, max=778, avg=216.43, stdev=64.19 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 118], 20.00th=[ 131], 00:16:24.160 | 30.00th=[ 143], 40.00th=[ 153], 50.00th=[ 172], 60.00th=[ 208], 00:16:24.160 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 297], 00:16:24.160 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 412], 99.95th=[ 437], 00:16:24.160 | 99.99th=[ 750] 00:16:24.160 bw ( KiB/s): min=11808, max=11808, per=40.06%, avg=11808.00, stdev= 0.00, samples=1 00:16:24.160 iops : min= 2952, max= 2952, avg=2952.00, stdev= 0.00, samples=1 00:16:24.160 lat (usec) : 100=0.07%, 250=65.02%, 500=34.65%, 750=0.17%, 1000=0.07% 00:16:24.160 cpu : usr=1.90%, sys=5.90%, ctx=4041, majf=0, minf=15 00:16:24.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 issued rwts: total=1972,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.160 job1: (groupid=0, jobs=1): err= 0: pid=92890: Sun Jul 14 20:16:12 2024 00:16:24.160 read: IOPS=1778, BW=7113KiB/s (7284kB/s)(7120KiB/1001msec) 00:16:24.160 slat (nsec): min=14420, max=85031, avg=20211.31, stdev=5228.90 00:16:24.160 clat (usec): min=142, max=1248, avg=272.99, stdev=113.57 00:16:24.160 lat (usec): min=160, max=1264, avg=293.20, stdev=114.59 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 178], 00:16:24.160 | 30.00th=[ 188], 40.00th=[ 200], 50.00th=[ 217], 60.00th=[ 293], 00:16:24.160 | 70.00th=[ 334], 80.00th=[ 375], 90.00th=[ 437], 95.00th=[ 482], 00:16:24.160 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 644], 99.95th=[ 1254], 00:16:24.160 | 99.99th=[ 1254] 00:16:24.160 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:24.160 slat (usec): min=20, max=133, avg=31.58, stdev= 9.23 00:16:24.160 clat (usec): min=81, max=7119, avg=197.71, stdev=187.56 00:16:24.160 lat (usec): min=126, max=7145, avg=229.29, stdev=188.99 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 120], 20.00th=[ 130], 00:16:24.160 | 30.00th=[ 141], 40.00th=[ 153], 50.00th=[ 167], 60.00th=[ 206], 00:16:24.160 | 70.00th=[ 233], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 297], 00:16:24.160 | 99.00th=[ 400], 99.50th=[ 449], 99.90th=[ 1565], 99.95th=[ 3228], 00:16:24.160 | 99.99th=[ 7111] 00:16:24.160 bw ( KiB/s): min= 8192, max= 8192, per=27.79%, avg=8192.00, stdev= 0.00, samples=1 00:16:24.160 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:24.160 lat (usec) : 100=0.03%, 250=68.99%, 500=28.79%, 750=1.96%, 1000=0.05% 00:16:24.160 lat (msec) : 2=0.13%, 4=0.03%, 10=0.03% 00:16:24.160 cpu : usr=1.90%, sys=7.30%, ctx=3833, majf=0, minf=7 00:16:24.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 issued rwts: total=1780,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:24.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.160 job2: (groupid=0, jobs=1): err= 0: pid=92891: Sun Jul 14 20:16:12 2024 00:16:24.160 read: IOPS=1503, BW=6014KiB/s (6158kB/s)(6020KiB/1001msec) 00:16:24.160 slat (usec): min=10, max=594, avg=20.21, stdev=17.22 00:16:24.160 clat (usec): min=206, max=816, avg=340.02, stdev=48.45 00:16:24.160 lat (usec): min=222, max=924, avg=360.23, stdev=51.20 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:16:24.160 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 343], 00:16:24.160 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 420], 00:16:24.160 | 99.00th=[ 465], 99.50th=[ 506], 99.90th=[ 766], 99.95th=[ 816], 00:16:24.160 | 99.99th=[ 816] 00:16:24.160 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:24.160 slat (usec): min=12, max=155, avg=32.15, stdev=12.53 00:16:24.160 clat (usec): min=72, max=4468, avg=261.30, stdev=156.83 00:16:24.160 lat (usec): min=164, max=4516, avg=293.45, stdev=157.73 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 169], 5.00th=[ 200], 10.00th=[ 215], 20.00th=[ 227], 00:16:24.160 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:16:24.160 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 338], 00:16:24.160 | 99.00th=[ 429], 99.50th=[ 611], 99.90th=[ 3916], 99.95th=[ 4490], 00:16:24.160 | 99.99th=[ 4490] 00:16:24.160 bw ( KiB/s): min= 8112, max= 8112, per=27.52%, avg=8112.00, stdev= 0.00, samples=1 00:16:24.160 iops : min= 2028, max= 2028, avg=2028.00, stdev= 0.00, samples=1 00:16:24.160 lat (usec) : 100=0.03%, 250=26.34%, 500=73.04%, 750=0.33%, 1000=0.13% 00:16:24.160 lat (msec) : 2=0.07%, 4=0.03%, 10=0.03% 00:16:24.160 cpu : usr=1.60%, sys=5.70%, ctx=3066, majf=0, minf=11 00:16:24.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 issued rwts: total=1505,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.160 job3: (groupid=0, jobs=1): err= 0: pid=92892: Sun Jul 14 20:16:12 2024 00:16:24.160 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:24.160 slat (nsec): min=14813, max=71566, avg=21607.98, stdev=4294.95 00:16:24.160 clat (usec): min=224, max=1230, avg=304.88, stdev=61.23 00:16:24.160 lat (usec): min=245, max=1251, avg=326.49, stdev=60.94 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:16:24.160 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:16:24.160 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 359], 95.00th=[ 445], 00:16:24.160 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 635], 99.95th=[ 1237], 00:16:24.160 | 99.99th=[ 1237] 00:16:24.160 write: IOPS=1743, BW=6973KiB/s (7140kB/s)(6980KiB/1001msec); 0 zone resets 00:16:24.160 slat (usec): min=14, max=143, avg=33.28, stdev= 8.51 00:16:24.160 clat (usec): min=177, max=1035, avg=247.50, stdev=45.95 00:16:24.160 lat (usec): min=207, max=1073, avg=280.78, stdev=46.16 00:16:24.160 clat percentiles (usec): 00:16:24.160 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 221], 00:16:24.160 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:16:24.160 | 70.00th=[ 253], 80.00th=[ 265], 
90.00th=[ 285], 95.00th=[ 334], 00:16:24.160 | 99.00th=[ 416], 99.50th=[ 441], 99.90th=[ 498], 99.95th=[ 1037], 00:16:24.160 | 99.99th=[ 1037] 00:16:24.160 bw ( KiB/s): min= 8192, max= 8192, per=27.79%, avg=8192.00, stdev= 0.00, samples=1 00:16:24.160 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:24.160 lat (usec) : 250=38.22%, 500=60.96%, 750=0.76% 00:16:24.160 lat (msec) : 2=0.06% 00:16:24.160 cpu : usr=1.60%, sys=6.90%, ctx=3282, majf=0, minf=14 00:16:24.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.160 issued rwts: total=1536,1745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.160 00:16:24.160 Run status group 0 (all jobs): 00:16:24.160 READ: bw=26.5MiB/s (27.8MB/s), 6014KiB/s-7880KiB/s (6158kB/s-8069kB/s), io=26.5MiB (27.8MB), run=1001-1001msec 00:16:24.160 WRITE: bw=28.8MiB/s (30.2MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.8MiB (30.2MB), run=1001-1001msec 00:16:24.160 00:16:24.160 Disk stats (read/write): 00:16:24.160 nvme0n1: ios=1671/2048, merge=0/0, ticks=412/407, in_queue=819, util=88.45% 00:16:24.160 nvme0n2: ios=1569/2009, merge=0/0, ticks=404/398, in_queue=802, util=87.96% 00:16:24.160 nvme0n3: ios=1157/1536, merge=0/0, ticks=393/413, in_queue=806, util=88.68% 00:16:24.160 nvme0n4: ios=1409/1536, merge=0/0, ticks=422/390, in_queue=812, util=89.77% 00:16:24.160 20:16:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:24.160 [global] 00:16:24.160 thread=1 00:16:24.160 invalidate=1 00:16:24.160 rw=write 00:16:24.160 time_based=1 00:16:24.160 runtime=1 00:16:24.160 ioengine=libaio 00:16:24.160 direct=1 00:16:24.160 bs=4096 00:16:24.160 iodepth=128 00:16:24.160 norandommap=0 00:16:24.160 numjobs=1 00:16:24.160 00:16:24.161 verify_dump=1 00:16:24.161 verify_backlog=512 00:16:24.161 verify_state_save=0 00:16:24.161 do_verify=1 00:16:24.161 verify=crc32c-intel 00:16:24.161 [job0] 00:16:24.161 filename=/dev/nvme0n1 00:16:24.161 [job1] 00:16:24.161 filename=/dev/nvme0n2 00:16:24.161 [job2] 00:16:24.161 filename=/dev/nvme0n3 00:16:24.161 [job3] 00:16:24.161 filename=/dev/nvme0n4 00:16:24.161 Could not set queue depth (nvme0n1) 00:16:24.161 Could not set queue depth (nvme0n2) 00:16:24.161 Could not set queue depth (nvme0n3) 00:16:24.161 Could not set queue depth (nvme0n4) 00:16:24.161 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.161 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.161 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.161 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:24.161 fio-3.35 00:16:24.161 Starting 4 threads 00:16:25.555 00:16:25.555 job0: (groupid=0, jobs=1): err= 0: pid=92952: Sun Jul 14 20:16:14 2024 00:16:25.555 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:16:25.555 slat (usec): min=3, max=8636, avg=104.12, stdev=491.26 00:16:25.555 clat (usec): min=7777, max=35325, avg=13923.55, stdev=6660.15 00:16:25.555 lat (usec): min=7844, max=35340, avg=14027.67, stdev=6706.43 00:16:25.555 
clat percentiles (usec): 00:16:25.555 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:16:25.555 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:16:25.555 | 70.00th=[11731], 80.00th=[15008], 90.00th=[26346], 95.00th=[28967], 00:16:25.555 | 99.00th=[32113], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:16:25.555 | 99.99th=[35390] 00:16:25.555 write: IOPS=4945, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1005msec); 0 zone resets 00:16:25.555 slat (usec): min=4, max=7006, avg=97.67, stdev=417.50 00:16:25.555 clat (usec): min=2281, max=30896, avg=12649.11, stdev=5819.62 00:16:25.555 lat (usec): min=6104, max=32616, avg=12746.78, stdev=5857.74 00:16:25.555 clat percentiles (usec): 00:16:25.555 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9765], 00:16:25.555 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:16:25.555 | 70.00th=[10814], 80.00th=[11600], 90.00th=[25297], 95.00th=[26870], 00:16:25.555 | 99.00th=[27657], 99.50th=[29754], 99.90th=[30540], 99.95th=[30802], 00:16:25.555 | 99.99th=[30802] 00:16:25.555 bw ( KiB/s): min=14160, max=24576, per=29.12%, avg=19368.00, stdev=7365.22, samples=2 00:16:25.555 iops : min= 3540, max= 6144, avg=4842.00, stdev=1841.31, samples=2 00:16:25.555 lat (msec) : 4=0.01%, 10=20.45%, 20=62.06%, 50=17.48% 00:16:25.555 cpu : usr=3.98%, sys=13.55%, ctx=836, majf=0, minf=9 00:16:25.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:25.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.555 issued rwts: total=4608,4970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.555 job1: (groupid=0, jobs=1): err= 0: pid=92953: Sun Jul 14 20:16:14 2024 00:16:25.555 read: IOPS=2582, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1004msec) 00:16:25.555 slat (usec): min=3, max=9954, avg=185.13, stdev=866.41 00:16:25.555 clat (usec): min=2501, max=35724, avg=22708.70, stdev=3489.11 00:16:25.555 lat (usec): min=11936, max=35742, avg=22893.82, stdev=3521.73 00:16:25.555 clat percentiles (usec): 00:16:25.555 | 1.00th=[13173], 5.00th=[17433], 10.00th=[18744], 20.00th=[20579], 00:16:25.555 | 30.00th=[21365], 40.00th=[21365], 50.00th=[21890], 60.00th=[22938], 00:16:25.555 | 70.00th=[24249], 80.00th=[25822], 90.00th=[26870], 95.00th=[28443], 00:16:25.555 | 99.00th=[32113], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:16:25.555 | 99.99th=[35914] 00:16:25.555 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:16:25.555 slat (usec): min=4, max=9599, avg=162.94, stdev=763.84 00:16:25.555 clat (usec): min=4648, max=35170, avg=22050.17, stdev=4072.68 00:16:25.555 lat (usec): min=4677, max=35186, avg=22213.11, stdev=4127.05 00:16:25.555 clat percentiles (usec): 00:16:25.555 | 1.00th=[10159], 5.00th=[15401], 10.00th=[17695], 20.00th=[20055], 00:16:25.555 | 30.00th=[20579], 40.00th=[21365], 50.00th=[21890], 60.00th=[22414], 00:16:25.555 | 70.00th=[22676], 80.00th=[24511], 90.00th=[27132], 95.00th=[28705], 00:16:25.555 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:16:25.555 | 99.99th=[35390] 00:16:25.555 bw ( KiB/s): min=11528, max=12312, per=17.92%, avg=11920.00, stdev=554.37, samples=2 00:16:25.555 iops : min= 2882, max= 3078, avg=2980.00, stdev=138.59, samples=2 00:16:25.555 lat (msec) : 4=0.02%, 10=0.14%, 20=16.82%, 50=83.02% 00:16:25.555 cpu : 
usr=3.49%, sys=6.58%, ctx=934, majf=0, minf=9 00:16:25.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:25.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.555 issued rwts: total=2593,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.555 job2: (groupid=0, jobs=1): err= 0: pid=92955: Sun Jul 14 20:16:14 2024 00:16:25.555 read: IOPS=5454, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1003msec) 00:16:25.555 slat (usec): min=8, max=8428, avg=87.02, stdev=409.33 00:16:25.555 clat (usec): min=2135, max=24121, avg=11472.33, stdev=2388.29 00:16:25.555 lat (usec): min=2157, max=24142, avg=11559.35, stdev=2374.13 00:16:25.555 clat percentiles (usec): 00:16:25.555 | 1.00th=[ 5145], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10159], 00:16:25.555 | 30.00th=[10290], 40.00th=[10421], 50.00th=[11731], 60.00th=[11994], 00:16:25.555 | 70.00th=[12256], 80.00th=[12387], 90.00th=[13042], 95.00th=[14746], 00:16:25.555 | 99.00th=[21890], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:16:25.555 | 99.99th=[24249] 00:16:25.555 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:16:25.555 slat (usec): min=11, max=3554, avg=86.03, stdev=352.58 00:16:25.555 clat (usec): min=7926, max=17341, avg=11317.56, stdev=1960.92 00:16:25.555 lat (usec): min=7946, max=17364, avg=11403.58, stdev=1966.34 00:16:25.555 clat percentiles (usec): 00:16:25.555 | 1.00th=[ 8291], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9765], 00:16:25.555 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11338], 00:16:25.555 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13435], 95.00th=[16188], 00:16:25.555 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:16:25.555 | 99.99th=[17433] 00:16:25.555 bw ( KiB/s): min=20480, max=24576, per=33.87%, avg=22528.00, stdev=2896.31, samples=2 00:16:25.555 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:16:25.555 lat (msec) : 4=0.24%, 10=18.54%, 20=80.19%, 50=1.03% 00:16:25.555 cpu : usr=4.49%, sys=14.77%, ctx=675, majf=0, minf=7 00:16:25.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:25.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.555 issued rwts: total=5471,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.555 job3: (groupid=0, jobs=1): err= 0: pid=92956: Sun Jul 14 20:16:14 2024 00:16:25.555 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:16:25.555 slat (usec): min=2, max=9684, avg=180.97, stdev=769.00 00:16:25.555 clat (usec): min=14389, max=32938, avg=23148.10, stdev=3025.04 00:16:25.555 lat (usec): min=14409, max=32955, avg=23329.07, stdev=3054.44 00:16:25.555 clat percentiles (usec): 00:16:25.555 | 1.00th=[14877], 5.00th=[17957], 10.00th=[19792], 20.00th=[21103], 00:16:25.555 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22676], 60.00th=[24249], 00:16:25.555 | 70.00th=[25297], 80.00th=[26084], 90.00th=[26608], 95.00th=[27657], 00:16:25.556 | 99.00th=[29230], 99.50th=[31851], 99.90th=[32375], 99.95th=[32900], 00:16:25.556 | 99.99th=[32900] 00:16:25.556 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1003msec); 0 zone resets 00:16:25.556 slat (usec): min=4, max=9813, 
avg=169.92, stdev=839.47 00:16:25.556 clat (usec): min=2213, max=32799, avg=22043.41, stdev=4238.77 00:16:25.556 lat (usec): min=2232, max=32839, avg=22213.33, stdev=4273.82 00:16:25.556 clat percentiles (usec): 00:16:25.556 | 1.00th=[ 5932], 5.00th=[14222], 10.00th=[18220], 20.00th=[20055], 00:16:25.556 | 30.00th=[20841], 40.00th=[21627], 50.00th=[22152], 60.00th=[22676], 00:16:25.556 | 70.00th=[23725], 80.00th=[25560], 90.00th=[27132], 95.00th=[27657], 00:16:25.556 | 99.00th=[30016], 99.50th=[30540], 99.90th=[32375], 99.95th=[32900], 00:16:25.556 | 99.99th=[32900] 00:16:25.556 bw ( KiB/s): min=11008, max=12288, per=17.51%, avg=11648.00, stdev=905.10, samples=2 00:16:25.556 iops : min= 2752, max= 3072, avg=2912.00, stdev=226.27, samples=2 00:16:25.556 lat (msec) : 4=0.54%, 10=0.57%, 20=13.45%, 50=85.44% 00:16:25.556 cpu : usr=2.40%, sys=7.88%, ctx=949, majf=0, minf=14 00:16:25.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:25.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:25.556 issued rwts: total=2560,3039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:25.556 00:16:25.556 Run status group 0 (all jobs): 00:16:25.556 READ: bw=59.2MiB/s (62.1MB/s), 9.97MiB/s-21.3MiB/s (10.5MB/s-22.3MB/s), io=59.5MiB (62.4MB), run=1003-1005msec 00:16:25.556 WRITE: bw=65.0MiB/s (68.1MB/s), 11.8MiB/s-21.9MiB/s (12.4MB/s-23.0MB/s), io=65.3MiB (68.5MB), run=1003-1005msec 00:16:25.556 00:16:25.556 Disk stats (read/write): 00:16:25.556 nvme0n1: ios=4363/4608, merge=0/0, ticks=15504/14725, in_queue=30229, util=88.16% 00:16:25.556 nvme0n2: ios=2449/2560, merge=0/0, ticks=21008/21692, in_queue=42700, util=88.37% 00:16:25.556 nvme0n3: ios=4614/4743, merge=0/0, ticks=12902/11834, in_queue=24736, util=89.30% 00:16:25.556 nvme0n4: ios=2304/2560, merge=0/0, ticks=20449/22146, in_queue=42595, util=89.44% 00:16:25.556 20:16:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:25.556 [global] 00:16:25.556 thread=1 00:16:25.556 invalidate=1 00:16:25.556 rw=randwrite 00:16:25.556 time_based=1 00:16:25.556 runtime=1 00:16:25.556 ioengine=libaio 00:16:25.556 direct=1 00:16:25.556 bs=4096 00:16:25.556 iodepth=128 00:16:25.556 norandommap=0 00:16:25.556 numjobs=1 00:16:25.556 00:16:25.556 verify_dump=1 00:16:25.556 verify_backlog=512 00:16:25.556 verify_state_save=0 00:16:25.556 do_verify=1 00:16:25.556 verify=crc32c-intel 00:16:25.556 [job0] 00:16:25.556 filename=/dev/nvme0n1 00:16:25.556 [job1] 00:16:25.556 filename=/dev/nvme0n2 00:16:25.556 [job2] 00:16:25.556 filename=/dev/nvme0n3 00:16:25.556 [job3] 00:16:25.556 filename=/dev/nvme0n4 00:16:25.556 Could not set queue depth (nvme0n1) 00:16:25.556 Could not set queue depth (nvme0n2) 00:16:25.556 Could not set queue depth (nvme0n3) 00:16:25.556 Could not set queue depth (nvme0n4) 00:16:25.556 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.556 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.556 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:25.556 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
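(Side note, not produced by the harness: the job file the fio-wrapper prints above maps one-to-one onto plain fio command-line options. A minimal sketch of an equivalent standalone invocation for a single namespace, with the device path and job name purely illustrative, might look like:)
# sketch only: run one randwrite job with the same parameters as the [global] section above
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 \
    --ioengine=libaio --direct=1 --time_based=1 --runtime=1 \
    --verify=crc32c-intel --verify_backlog=512 --do_verify=1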
00:16:25.556 fio-3.35 00:16:25.556 Starting 4 threads 00:16:26.935 00:16:26.935 job0: (groupid=0, jobs=1): err= 0: pid=93011: Sun Jul 14 20:16:15 2024 00:16:26.935 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:16:26.935 slat (usec): min=2, max=9103, avg=119.65, stdev=615.92 00:16:26.935 clat (usec): min=9149, max=26628, avg=15368.81, stdev=3982.33 00:16:26.935 lat (usec): min=9168, max=26667, avg=15488.47, stdev=4043.77 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 9503], 5.00th=[11469], 10.00th=[11600], 20.00th=[11863], 00:16:26.935 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13042], 60.00th=[16319], 00:16:26.935 | 70.00th=[18744], 80.00th=[19792], 90.00th=[20579], 95.00th=[21627], 00:16:26.935 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26346], 99.95th=[26608], 00:16:26.935 | 99.99th=[26608] 00:16:26.935 write: IOPS=4196, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1009msec); 0 zone resets 00:16:26.935 slat (usec): min=5, max=7163, avg=113.37, stdev=435.07 00:16:26.935 clat (usec): min=8104, max=28942, avg=15263.95, stdev=4783.98 00:16:26.935 lat (usec): min=8219, max=28977, avg=15377.33, stdev=4816.40 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[11600], 00:16:26.935 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[14877], 00:16:26.935 | 70.00th=[20317], 80.00th=[20841], 90.00th=[21103], 95.00th=[22152], 00:16:26.935 | 99.00th=[26608], 99.50th=[27657], 99.90th=[28443], 99.95th=[28443], 00:16:26.935 | 99.99th=[28967] 00:16:26.935 bw ( KiB/s): min=12376, max=20480, per=22.11%, avg=16428.00, stdev=5730.39, samples=2 00:16:26.935 iops : min= 3094, max= 5120, avg=4107.00, stdev=1432.60, samples=2 00:16:26.935 lat (msec) : 10=4.20%, 20=70.59%, 50=25.21% 00:16:26.935 cpu : usr=4.27%, sys=9.72%, ctx=838, majf=0, minf=9 00:16:26.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:26.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.935 issued rwts: total=4096,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.935 job1: (groupid=0, jobs=1): err= 0: pid=93012: Sun Jul 14 20:16:15 2024 00:16:26.935 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:16:26.935 slat (usec): min=4, max=10836, avg=129.61, stdev=704.20 00:16:26.935 clat (usec): min=4631, max=30153, avg=15972.06, stdev=4882.93 00:16:26.935 lat (usec): min=4645, max=30166, avg=16101.67, stdev=4932.78 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11469], 00:16:26.935 | 30.00th=[11731], 40.00th=[13435], 50.00th=[15270], 60.00th=[17695], 00:16:26.935 | 70.00th=[19530], 80.00th=[20579], 90.00th=[22676], 95.00th=[24249], 00:16:26.935 | 99.00th=[27657], 99.50th=[28705], 99.90th=[30016], 99.95th=[30278], 00:16:26.935 | 99.99th=[30278] 00:16:26.935 write: IOPS=4227, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1008msec); 0 zone resets 00:16:26.935 slat (usec): min=5, max=8712, avg=102.94, stdev=387.35 00:16:26.935 clat (usec): min=3725, max=31774, avg=14625.55, stdev=5243.01 00:16:26.935 lat (usec): min=3741, max=31814, avg=14728.49, stdev=5286.30 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 4621], 5.00th=[ 6587], 10.00th=[ 9241], 20.00th=[11076], 00:16:26.935 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12780], 00:16:26.935 | 
70.00th=[20055], 80.00th=[20841], 90.00th=[21103], 95.00th=[22414], 00:16:26.935 | 99.00th=[26870], 99.50th=[28705], 99.90th=[30016], 99.95th=[30540], 00:16:26.935 | 99.99th=[31851] 00:16:26.935 bw ( KiB/s): min=12224, max=20881, per=22.27%, avg=16552.50, stdev=6121.42, samples=2 00:16:26.935 iops : min= 3056, max= 5220, avg=4138.00, stdev=1530.18, samples=2 00:16:26.935 lat (msec) : 4=0.14%, 10=10.72%, 20=60.91%, 50=28.23% 00:16:26.935 cpu : usr=3.87%, sys=10.53%, ctx=1004, majf=0, minf=7 00:16:26.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:26.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.935 issued rwts: total=4096,4261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.935 job2: (groupid=0, jobs=1): err= 0: pid=93013: Sun Jul 14 20:16:15 2024 00:16:26.935 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:16:26.935 slat (usec): min=8, max=5419, avg=95.40, stdev=459.55 00:16:26.935 clat (usec): min=3443, max=17342, avg=12506.26, stdev=1746.40 00:16:26.935 lat (usec): min=3456, max=17377, avg=12601.65, stdev=1781.35 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 7701], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[11338], 00:16:26.935 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:16:26.935 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14615], 95.00th=[15139], 00:16:26.935 | 99.00th=[16581], 99.50th=[16712], 99.90th=[17171], 99.95th=[17433], 00:16:26.935 | 99.99th=[17433] 00:16:26.935 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:16:26.935 slat (usec): min=11, max=5501, avg=91.49, stdev=458.38 00:16:26.935 clat (usec): min=3105, max=18998, avg=12247.84, stdev=1462.24 00:16:26.935 lat (usec): min=3145, max=19056, avg=12339.33, stdev=1505.33 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 7635], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11469], 00:16:26.935 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:16:26.935 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[14222], 00:16:26.935 | 99.00th=[16581], 99.50th=[17171], 99.90th=[18220], 99.95th=[18744], 00:16:26.935 | 99.99th=[19006] 00:16:26.935 bw ( KiB/s): min=20480, max=20521, per=27.59%, avg=20500.50, stdev=28.99, samples=2 00:16:26.935 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:16:26.935 lat (msec) : 4=0.31%, 10=5.56%, 20=94.13% 00:16:26.935 cpu : usr=4.59%, sys=15.75%, ctx=525, majf=0, minf=14 00:16:26.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:26.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.935 issued rwts: total=5120,5130,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.935 job3: (groupid=0, jobs=1): err= 0: pid=93014: Sun Jul 14 20:16:15 2024 00:16:26.935 read: IOPS=5077, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:16:26.935 slat (usec): min=4, max=12188, avg=101.34, stdev=653.23 00:16:26.935 clat (usec): min=2588, max=25157, avg=13192.18, stdev=3246.52 00:16:26.935 lat (usec): min=4956, max=25182, avg=13293.52, stdev=3278.00 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 6849], 5.00th=[ 9372], 10.00th=[ 
9765], 20.00th=[11076], 00:16:26.935 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12649], 60.00th=[13435], 00:16:26.935 | 70.00th=[13960], 80.00th=[14746], 90.00th=[17695], 95.00th=[20055], 00:16:26.935 | 99.00th=[23725], 99.50th=[24511], 99.90th=[25035], 99.95th=[25035], 00:16:26.935 | 99.99th=[25035] 00:16:26.935 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:26.935 slat (usec): min=5, max=10336, avg=86.64, stdev=490.89 00:16:26.935 clat (usec): min=4348, max=25005, avg=11720.42, stdev=2365.89 00:16:26.935 lat (usec): min=4385, max=25015, avg=11807.06, stdev=2418.02 00:16:26.935 clat percentiles (usec): 00:16:26.935 | 1.00th=[ 5014], 5.00th=[ 6325], 10.00th=[ 8291], 20.00th=[10552], 00:16:26.935 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12256], 60.00th=[12518], 00:16:26.935 | 70.00th=[12911], 80.00th=[13566], 90.00th=[13960], 95.00th=[14222], 00:16:26.935 | 99.00th=[15008], 99.50th=[17957], 99.90th=[24773], 99.95th=[25035], 00:16:26.935 | 99.99th=[25035] 00:16:26.935 bw ( KiB/s): min=20480, max=20521, per=27.59%, avg=20500.50, stdev=28.99, samples=2 00:16:26.935 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:16:26.935 lat (msec) : 4=0.01%, 10=15.66%, 20=81.56%, 50=2.78% 00:16:26.935 cpu : usr=4.97%, sys=13.92%, ctx=622, majf=0, minf=11 00:16:26.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:26.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.935 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.935 00:16:26.935 Run status group 0 (all jobs): 00:16:26.935 READ: bw=71.3MiB/s (74.8MB/s), 15.9MiB/s-19.9MiB/s (16.6MB/s-20.9MB/s), io=72.0MiB (75.5MB), run=1004-1009msec 00:16:26.935 WRITE: bw=72.6MiB/s (76.1MB/s), 16.4MiB/s-20.0MiB/s (17.2MB/s-20.9MB/s), io=73.2MiB (76.8MB), run=1004-1009msec 00:16:26.935 00:16:26.935 Disk stats (read/write): 00:16:26.935 nvme0n1: ios=3634/3855, merge=0/0, ticks=19581/20951, in_queue=40532, util=87.27% 00:16:26.935 nvme0n2: ios=3630/3888, merge=0/0, ticks=40399/39480, in_queue=79879, util=89.59% 00:16:26.935 nvme0n3: ios=4158/4608, merge=0/0, ticks=24738/24361, in_queue=49099, util=89.31% 00:16:26.935 nvme0n4: ios=4102/4607, merge=0/0, ticks=50978/52148, in_queue=103126, util=89.77% 00:16:26.935 20:16:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:26.935 20:16:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=93027 00:16:26.935 20:16:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:26.935 20:16:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:26.935 [global] 00:16:26.935 thread=1 00:16:26.935 invalidate=1 00:16:26.935 rw=read 00:16:26.935 time_based=1 00:16:26.935 runtime=10 00:16:26.935 ioengine=libaio 00:16:26.935 direct=1 00:16:26.935 bs=4096 00:16:26.935 iodepth=1 00:16:26.935 norandommap=1 00:16:26.935 numjobs=1 00:16:26.935 00:16:26.935 [job0] 00:16:26.935 filename=/dev/nvme0n1 00:16:26.935 [job1] 00:16:26.935 filename=/dev/nvme0n2 00:16:26.936 [job2] 00:16:26.936 filename=/dev/nvme0n3 00:16:26.936 [job3] 00:16:26.936 filename=/dev/nvme0n4 00:16:26.936 Could not set queue depth (nvme0n1) 00:16:26.936 Could not set queue depth (nvme0n2) 00:16:26.936 Could not set queue depth (nvme0n3) 00:16:26.936 Could not set 
queue depth (nvme0n4) 00:16:26.936 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.936 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.936 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.936 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.936 fio-3.35 00:16:26.936 Starting 4 threads 00:16:30.222 20:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:30.222 fio: pid=93076, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.222 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39374848, buflen=4096 00:16:30.222 20:16:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:30.222 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=38965248, buflen=4096 00:16:30.222 fio: pid=93075, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.222 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.222 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:30.481 fio: pid=93073, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.481 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=51388416, buflen=4096 00:16:30.481 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.481 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:30.740 fio: pid=93074, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:30.740 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=57798656, buflen=4096 00:16:30.740 00:16:30.740 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93073: Sun Jul 14 20:16:19 2024 00:16:30.740 read: IOPS=3771, BW=14.7MiB/s (15.4MB/s)(49.0MiB/3327msec) 00:16:30.740 slat (usec): min=8, max=9493, avg=17.80, stdev=152.66 00:16:30.740 clat (usec): min=115, max=2426, avg=245.88, stdev=37.47 00:16:30.741 lat (usec): min=128, max=10009, avg=263.68, stdev=158.17 00:16:30.741 clat percentiles (usec): 00:16:30.741 | 1.00th=[ 155], 5.00th=[ 194], 10.00th=[ 219], 20.00th=[ 229], 00:16:30.741 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:16:30.741 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:16:30.741 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 461], 99.95th=[ 519], 00:16:30.741 | 99.99th=[ 644] 00:16:30.741 bw ( KiB/s): min=14544, max=16056, per=29.62%, avg=15093.67, stdev=505.99, samples=6 00:16:30.741 iops : min= 3636, max= 4014, avg=3773.33, stdev=126.53, samples=6 00:16:30.741 lat (usec) : 250=57.85%, 500=42.06%, 750=0.07% 00:16:30.741 lat (msec) : 4=0.01% 00:16:30.741 cpu : usr=1.02%, sys=4.90%, ctx=12590, majf=0, minf=1 00:16:30.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 issued rwts: total=12547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.741 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93074: Sun Jul 14 20:16:19 2024 00:16:30.741 read: IOPS=3926, BW=15.3MiB/s (16.1MB/s)(55.1MiB/3594msec) 00:16:30.741 slat (usec): min=8, max=13674, avg=18.66, stdev=193.17 00:16:30.741 clat (usec): min=77, max=3107, avg=234.55, stdev=62.21 00:16:30.741 lat (usec): min=125, max=13926, avg=253.22, stdev=207.18 00:16:30.741 clat percentiles (usec): 00:16:30.741 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 149], 20.00th=[ 219], 00:16:30.741 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 249], 00:16:30.741 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:16:30.741 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 441], 99.95th=[ 578], 00:16:30.741 | 99.99th=[ 3032] 00:16:30.741 bw ( KiB/s): min=14520, max=16360, per=29.71%, avg=15139.00, stdev=628.12, samples=6 00:16:30.741 iops : min= 3630, max= 4090, avg=3784.67, stdev=157.06, samples=6 00:16:30.741 lat (usec) : 100=0.01%, 250=61.56%, 500=38.36%, 750=0.04% 00:16:30.741 lat (msec) : 2=0.01%, 4=0.02% 00:16:30.741 cpu : usr=1.14%, sys=4.95%, ctx=14184, majf=0, minf=1 00:16:30.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 issued rwts: total=14112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.741 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93075: Sun Jul 14 20:16:19 2024 00:16:30.741 read: IOPS=3059, BW=11.9MiB/s (12.5MB/s)(37.2MiB/3110msec) 00:16:30.741 slat (usec): min=8, max=7775, avg=18.36, stdev=110.20 00:16:30.741 clat (usec): min=53, max=158917, avg=306.87, stdev=1627.22 00:16:30.741 lat (usec): min=163, max=158932, avg=325.23, stdev=1630.88 00:16:30.741 clat percentiles (usec): 00:16:30.741 | 1.00th=[ 212], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:16:30.741 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:16:30.741 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:16:30.741 | 99.00th=[ 371], 99.50th=[ 474], 99.90th=[ 783], 99.95th=[ 1037], 00:16:30.741 | 99.99th=[158335] 00:16:30.741 bw ( KiB/s): min= 8560, max=13088, per=23.97%, avg=12215.67, stdev=1795.22, samples=6 00:16:30.741 iops : min= 2140, max= 3272, avg=3053.83, stdev=448.76, samples=6 00:16:30.741 lat (usec) : 100=0.01%, 250=2.43%, 500=97.15%, 750=0.29%, 1000=0.04% 00:16:30.741 lat (msec) : 2=0.03%, 4=0.02%, 250=0.01% 00:16:30.741 cpu : usr=1.09%, sys=4.02%, ctx=9529, majf=0, minf=1 00:16:30.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 issued rwts: total=9514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.741 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93076: Sun Jul 14 20:16:19 2024 00:16:30.741 read: IOPS=3325, BW=13.0MiB/s 
(13.6MB/s)(37.6MiB/2891msec) 00:16:30.741 slat (usec): min=12, max=131, avg=16.61, stdev= 4.34 00:16:30.741 clat (usec): min=148, max=4661, avg=282.41, stdev=75.27 00:16:30.741 lat (usec): min=162, max=4677, avg=299.01, stdev=75.68 00:16:30.741 clat percentiles (usec): 00:16:30.741 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 251], 20.00th=[ 265], 00:16:30.741 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:16:30.741 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 334], 00:16:30.741 | 99.00th=[ 375], 99.50th=[ 449], 99.90th=[ 988], 99.95th=[ 1647], 00:16:30.741 | 99.99th=[ 4686] 00:16:30.741 bw ( KiB/s): min=12800, max=13024, per=25.38%, avg=12934.00, stdev=87.84, samples=5 00:16:30.741 iops : min= 3200, max= 3256, avg=3233.40, stdev=21.90, samples=5 00:16:30.741 lat (usec) : 250=9.88%, 500=89.72%, 750=0.25%, 1000=0.05% 00:16:30.741 lat (msec) : 2=0.06%, 4=0.01%, 10=0.01% 00:16:30.741 cpu : usr=1.14%, sys=4.33%, ctx=9614, majf=0, minf=1 00:16:30.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.741 issued rwts: total=9614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.741 00:16:30.741 Run status group 0 (all jobs): 00:16:30.741 READ: bw=49.8MiB/s (52.2MB/s), 11.9MiB/s-15.3MiB/s (12.5MB/s-16.1MB/s), io=179MiB (188MB), run=2891-3594msec 00:16:30.741 00:16:30.741 Disk stats (read/write): 00:16:30.741 nvme0n1: ios=11674/0, merge=0/0, ticks=2894/0, in_queue=2894, util=95.44% 00:16:30.741 nvme0n2: ios=12684/0, merge=0/0, ticks=3076/0, in_queue=3076, util=95.24% 00:16:30.741 nvme0n3: ios=8777/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.60% 00:16:30.741 nvme0n4: ios=9514/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.79% 00:16:30.741 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:30.741 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:31.000 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:31.000 20:16:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:31.259 20:16:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:31.259 20:16:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:31.518 20:16:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:31.518 20:16:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:31.777 20:16:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:31.777 20:16:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- 
# wait 93027 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:32.037 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.296 nvmf hotplug test: fio failed as expected 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.296 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.296 rmmod nvme_tcp 00:16:32.296 rmmod nvme_fabrics 00:16:32.555 rmmod nvme_keyring 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 92537 ']' 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 92537 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 92537 ']' 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 92537 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o 
comm= 92537 00:16:32.555 killing process with pid 92537 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92537' 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 92537 00:16:32.555 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 92537 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:32.815 00:16:32.815 real 0m19.567s 00:16:32.815 user 1m15.445s 00:16:32.815 sys 0m8.188s 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:32.815 ************************************ 00:16:32.815 END TEST nvmf_fio_target 00:16:32.815 ************************************ 00:16:32.815 20:16:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.815 20:16:21 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:32.815 20:16:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:32.815 20:16:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:32.815 20:16:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:32.815 ************************************ 00:16:32.815 START TEST nvmf_bdevio 00:16:32.815 ************************************ 00:16:32.815 20:16:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:33.074 * Looking for test storage... 
00:16:33.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.074 20:16:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.075 20:16:21 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:33.075 Cannot find device "nvmf_tgt_br" 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.075 Cannot find device "nvmf_tgt_br2" 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:33.075 20:16:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:33.075 Cannot find device "nvmf_tgt_br" 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:33.075 Cannot find device "nvmf_tgt_br2" 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.075 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:33.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:33.335 00:16:33.335 --- 10.0.0.2 ping statistics --- 00:16:33.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.335 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:33.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:33.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:16:33.335 00:16:33.335 --- 10.0.0.3 ping statistics --- 00:16:33.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.335 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:33.335 00:16:33.335 --- 10.0.0.1 ping statistics --- 00:16:33.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.335 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=93398 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 93398 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 93398 ']' 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:33.335 20:16:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.335 [2024-07-14 20:16:22.384284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:33.335 [2024-07-14 20:16:22.384361] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.595 [2024-07-14 20:16:22.517378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.595 [2024-07-14 20:16:22.610394] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.595 [2024-07-14 20:16:22.611047] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
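For readers following the trace, the nvmf_veth_init sequence above (nvmf/common.sh@141-207) reduces to the topology sketched below. Every command is taken from the trace itself; only the grouping, the comments, and the loop over the link-up calls are added, so treat it as a condensed reading aid rather than part of the log.

# Condensed sketch of the test network built by nvmf_veth_init (run as root).
# The initiator end (nvmf_init_if, 10.0.0.1) stays in the default namespace; both
# target ends live in nvmf_tgt_ns_spdk and all host-side peers hang off nvmf_br.

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP (port 4420) in on the initiator interface and forward across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks in both directions, matching the pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1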
00:16:33.595 [2024-07-14 20:16:22.611539] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.595 [2024-07-14 20:16:22.612003] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.595 [2024-07-14 20:16:22.612242] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.595 [2024-07-14 20:16:22.612750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:33.595 [2024-07-14 20:16:22.612915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:33.595 [2024-07-14 20:16:22.613215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:33.595 [2024-07-14 20:16:22.613219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.531 [2024-07-14 20:16:23.460546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.531 Malloc0 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:16:34.531 [2024-07-14 20:16:23.542951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:34.531 { 00:16:34.531 "params": { 00:16:34.531 "name": "Nvme$subsystem", 00:16:34.531 "trtype": "$TEST_TRANSPORT", 00:16:34.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.531 "adrfam": "ipv4", 00:16:34.531 "trsvcid": "$NVMF_PORT", 00:16:34.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.531 "hdgst": ${hdgst:-false}, 00:16:34.531 "ddgst": ${ddgst:-false} 00:16:34.531 }, 00:16:34.531 "method": "bdev_nvme_attach_controller" 00:16:34.531 } 00:16:34.531 EOF 00:16:34.531 )") 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:34.531 20:16:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:34.531 "params": { 00:16:34.531 "name": "Nvme1", 00:16:34.531 "trtype": "tcp", 00:16:34.531 "traddr": "10.0.0.2", 00:16:34.531 "adrfam": "ipv4", 00:16:34.531 "trsvcid": "4420", 00:16:34.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.532 "hdgst": false, 00:16:34.532 "ddgst": false 00:16:34.532 }, 00:16:34.532 "method": "bdev_nvme_attach_controller" 00:16:34.532 }' 00:16:34.532 [2024-07-14 20:16:23.608880] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
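The bdevio run above has two halves: the target side is provisioned through the SPDK RPC socket (the rpc_cmd calls at target/bdevio.sh@18-22), and the initiator side is the bdev_nvme_attach_controller JSON printed just above, generated by gen_nvmf_target_json and handed to the bdevio app on /dev/fd/62. As a reading aid, the same provisioning expressed as direct rpc.py calls might look like the sketch below; the arguments are copied from the trace, while the explicit rpc.py wrapper is only an illustration of what rpc_cmd forwards to.

# Sketch: target-side provisioning equivalent to target/bdevio.sh@18-22, issued by hand.
# rpc.py talks to the running nvmf_tgt on the default /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192        # flags exactly as issued by the test
"$rpc" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator half: bdevio attaches to that listener using the JSON shown above,
#   bdevio --json /dev/fd/62   with fd 62 fed by gen_nvmf_target_json.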
00:16:34.532 [2024-07-14 20:16:23.609481] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93453 ] 00:16:34.790 [2024-07-14 20:16:23.753742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.048 [2024-07-14 20:16:23.871975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.048 [2024-07-14 20:16:23.872124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.048 [2024-07-14 20:16:23.872130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.048 I/O targets: 00:16:35.048 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:35.048 00:16:35.048 00:16:35.048 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.048 http://cunit.sourceforge.net/ 00:16:35.048 00:16:35.048 00:16:35.048 Suite: bdevio tests on: Nvme1n1 00:16:35.306 Test: blockdev write read block ...passed 00:16:35.306 Test: blockdev write zeroes read block ...passed 00:16:35.306 Test: blockdev write zeroes read no split ...passed 00:16:35.306 Test: blockdev write zeroes read split ...passed 00:16:35.306 Test: blockdev write zeroes read split partial ...passed 00:16:35.306 Test: blockdev reset ...[2024-07-14 20:16:24.214918] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:35.306 [2024-07-14 20:16:24.215084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bfe20 (9): Bad file descriptor 00:16:35.306 [2024-07-14 20:16:24.226047] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:35.306 passed 00:16:35.306 Test: blockdev write read 8 blocks ...passed 00:16:35.306 Test: blockdev write read size > 128k ...passed 00:16:35.306 Test: blockdev write read invalid size ...passed 00:16:35.306 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:35.306 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:35.306 Test: blockdev write read max offset ...passed 00:16:35.306 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:35.306 Test: blockdev writev readv 8 blocks ...passed 00:16:35.306 Test: blockdev writev readv 30 x 1block ...passed 00:16:35.564 Test: blockdev writev readv block ...passed 00:16:35.564 Test: blockdev writev readv size > 128k ...passed 00:16:35.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:35.564 Test: blockdev comparev and writev ...[2024-07-14 20:16:24.400180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.400257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.400286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.400297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.400797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.400823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.400841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.400851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.401224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.401250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.401267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.401277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.401744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.401774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.401792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.564 [2024-07-14 20:16:24.401811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:35.564 passed 00:16:35.564 Test: blockdev nvme passthru rw ...passed 00:16:35.564 Test: blockdev nvme passthru vendor specific ...[2024-07-14 20:16:24.484211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.564 [2024-07-14 20:16:24.484266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.484427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.564 [2024-07-14 20:16:24.484446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:35.564 [2024-07-14 20:16:24.484576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.564 [2024-07-14 20:16:24.484593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:35.565 [2024-07-14 20:16:24.484733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:35.565 [2024-07-14 20:16:24.484758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:35.565 passed 00:16:35.565 Test: blockdev nvme admin passthru ...passed 00:16:35.565 Test: blockdev copy ...passed 00:16:35.565 00:16:35.565 Run Summary: Type Total Ran Passed Failed Inactive 00:16:35.565 suites 1 1 n/a 0 0 00:16:35.565 tests 23 23 23 0 0 00:16:35.565 asserts 
152 152 152 0 n/a 00:16:35.565 00:16:35.565 Elapsed time = 0.885 seconds 00:16:35.822 20:16:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.822 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.823 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.823 rmmod nvme_tcp 00:16:35.823 rmmod nvme_fabrics 00:16:35.823 rmmod nvme_keyring 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 93398 ']' 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 93398 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 93398 ']' 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 93398 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93398 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:16:36.081 killing process with pid 93398 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93398' 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 93398 00:16:36.081 20:16:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 93398 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.340 20:16:25 
nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:36.340 00:16:36.340 real 0m3.493s 00:16:36.340 user 0m12.839s 00:16:36.340 sys 0m0.978s 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:36.340 20:16:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:36.340 ************************************ 00:16:36.340 END TEST nvmf_bdevio 00:16:36.340 ************************************ 00:16:36.340 20:16:25 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:36.340 20:16:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:36.340 20:16:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:36.340 20:16:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.340 ************************************ 00:16:36.340 START TEST nvmf_auth_target 00:16:36.340 ************************************ 00:16:36.340 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:36.599 * Looking for test storage... 00:16:36.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.599 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.599 20:16:25 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:36.600 Cannot find device "nvmf_tgt_br" 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.600 Cannot find device "nvmf_tgt_br2" 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:36.600 Cannot find device "nvmf_tgt_br" 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:36.600 Cannot find device "nvmf_tgt_br2" 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.600 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.600 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:36.858 20:16:25 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:36.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:16:36.858 00:16:36.858 --- 10.0.0.2 ping statistics --- 00:16:36.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.858 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:36.858 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.858 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:16:36.858 00:16:36.858 --- 10.0.0.3 ping statistics --- 00:16:36.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.858 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:16:36.858 00:16:36.858 --- 10.0.0.1 ping statistics --- 00:16:36.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.858 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=93632 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 93632 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93632 ']' 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
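nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. The real waitforlisten helper in autotest_common.sh tracks the pid and retry limits more carefully; the loop below is only a minimal stand-in to show the idea, and polling with rpc_get_methods is an illustrative choice rather than the helper's exact mechanism.

# Minimal sketch: start the target in the netns, then wait until its RPC socket is live.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    # Any cheap RPC works as a liveness probe once /var/tmp/spdk.sock is listening.
    if "$rpc" -t 1 rpc_get_methods &> /dev/null; then
        break
    fi
    # Bail out early if the target died instead of coming up.
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
    sleep 0.5
done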
00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:36.858 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.233 20:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:38.233 20:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:38.233 20:16:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:38.233 20:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.233 20:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=93676 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c66f2a6bfd9be19a9406f5de266fc55fa893c11f502f64ea 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.u44 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c66f2a6bfd9be19a9406f5de266fc55fa893c11f502f64ea 0 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c66f2a6bfd9be19a9406f5de266fc55fa893c11f502f64ea 0 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c66f2a6bfd9be19a9406f5de266fc55fa893c11f502f64ea 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.u44 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.u44 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.u44 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.233 
20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=172fee0a623718fa2b352b2dc8698f9b810359db59696a373a93c41c37cc156b 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zlK 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 172fee0a623718fa2b352b2dc8698f9b810359db59696a373a93c41c37cc156b 3 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 172fee0a623718fa2b352b2dc8698f9b810359db59696a373a93c41c37cc156b 3 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=172fee0a623718fa2b352b2dc8698f9b810359db59696a373a93c41c37cc156b 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zlK 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zlK 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.zlK 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b203808035cfa3e6e3417daf1439eb81 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.S8U 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b203808035cfa3e6e3417daf1439eb81 1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b203808035cfa3e6e3417daf1439eb81 1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # 
key=b203808035cfa3e6e3417daf1439eb81 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.S8U 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.S8U 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.S8U 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b0e4657831f0ceb8fa5ae9ad600396a0859352959cda7462 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bE2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b0e4657831f0ceb8fa5ae9ad600396a0859352959cda7462 2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b0e4657831f0ceb8fa5ae9ad600396a0859352959cda7462 2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b0e4657831f0ceb8fa5ae9ad600396a0859352959cda7462 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bE2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bE2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.bE2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9c38b8448f675f69d9fb7fdd242a4314737bddad0a7825e5 00:16:38.233 20:16:27 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6HL 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9c38b8448f675f69d9fb7fdd242a4314737bddad0a7825e5 2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9c38b8448f675f69d9fb7fdd242a4314737bddad0a7825e5 2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9c38b8448f675f69d9fb7fdd242a4314737bddad0a7825e5 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:38.233 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6HL 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6HL 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.6HL 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1a1eb31a18cdcbbf39ed57d271862d03 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6Nm 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1a1eb31a18cdcbbf39ed57d271862d03 1 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1a1eb31a18cdcbbf39ed57d271862d03 1 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1a1eb31a18cdcbbf39ed57d271862d03 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6Nm 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6Nm 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.6Nm 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len 
file key 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0f63270233dd75e7b9ad75caf1a9a5663bcc06592a65142beb26916df531d6b5 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9um 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0f63270233dd75e7b9ad75caf1a9a5663bcc06592a65142beb26916df531d6b5 3 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0f63270233dd75e7b9ad75caf1a9a5663bcc06592a65142beb26916df531d6b5 3 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0f63270233dd75e7b9ad75caf1a9a5663bcc06592a65142beb26916df531d6b5 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9um 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9um 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.9um 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 93632 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93632 ']' 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:38.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
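The four gen_dhchap_key calls traced above (target/auth.sh@67-70) all follow the same pattern: pull random bytes from /dev/urandom, wrap them into a DH-HMAC-CHAP secret via the format_dhchap_key/format_key helpers (the python one-liner in the trace, which emits the DHHC-1 representation), and drop the result into a 0600 temp file whose path becomes the keys[]/ckeys[] entry. A stripped-down version of that flow, assuming SPDK's test/nvmf/common.sh is sourced so format_dhchap_key is available, looks roughly like:

# Sketch of gen_dhchap_key <digest> <len>, mirroring the steps visible in the trace.
# Assumes test/nvmf/common.sh is sourced for format_dhchap_key.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    # <len> hex characters of key material == len/2 random bytes.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

    file=$(mktemp -t "spdk.key-$digest.XXX")
    # format_dhchap_key turns the raw hex plus hash index into the DHHC-1 secret string;
    # writing its output into the temp file here is this sketch's reconstruction of that step.
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"

    chmod 0600 "$file"
    echo "$file"    # callers store this path, e.g. keys[0]=$(gen_dhchap_key_sketch null 48)
}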
00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:38.493 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 93676 /var/tmp/host.sock 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93676 ']' 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:38.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:38.751 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.010 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:39.010 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:16:39.010 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:39.010 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.010 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.u44 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.u44 00:16:39.268 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.u44 00:16:39.527 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.zlK ]] 00:16:39.527 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zlK 00:16:39.527 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.527 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.527 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.527 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zlK 00:16:39.527 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
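Both daemons are only usable once their RPC sockets answer requests, which is what the two waitforlisten calls above block on: /var/tmp/spdk.sock for the target (pid 93632) and /var/tmp/host.sock for the host-side application (pid 93676). A simplified sketch of that wait, assuming a poll on rpc_get_methods (the real helper in autotest_common.sh does more bookkeeping):

  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1                           # process died
          if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
              return 0                                                      # socket is up and answering
          fi
          sleep 0.5
      done
      return 1
  }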
/tmp/spdk.key-sha512.zlK 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.S8U 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.S8U 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.S8U 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.bE2 ]] 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bE2 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.785 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.044 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.044 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bE2 00:16:40.044 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bE2 00:16:40.302 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:40.302 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6HL 00:16:40.302 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.302 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.302 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.302 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6HL 00:16:40.302 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6HL 00:16:40.560 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.6Nm ]] 00:16:40.560 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Nm 00:16:40.560 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.561 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.561 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.561 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Nm 00:16:40.561 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Nm 00:16:40.818 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:40.818 
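Each generated key file is registered twice: once with the target over /var/tmp/spdk.sock (rpc_cmd) and once with the host-side bdev_nvme application over /var/tmp/host.sock (hostrpc), and the controller keys follow the same pattern whenever one exists for the slot. Condensed, the loop being traced amounts to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i" "${keys[i]}"                        # target side
      "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"  # host side
      if [[ -n ${ckeys[i]} ]]; then
          "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
          "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
      fi
  done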
20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9um 00:16:40.818 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.818 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.818 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.818 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.9um 00:16:40.818 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.9um 00:16:41.076 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:41.076 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:41.076 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.076 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.076 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:41.076 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.378 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.661 00:16:41.661 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.661 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.661 20:16:30 nvmf_tcp.nvmf_auth_target -- 
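connect_authenticate sha256 null 0 above is one cell of the test matrix: the host is restricted to a single digest and DH group via bdev_nvme_set_options, the target is told which key pair to expect for this host via nvmf_subsystem_add_host, and the host then attaches using the matching keyring entries. Stripped of the rpc plumbing, the step looks like this (hostnqn is the UUID-based NQN used throughout this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0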
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.920 { 00:16:41.920 "auth": { 00:16:41.920 "dhgroup": "null", 00:16:41.920 "digest": "sha256", 00:16:41.920 "state": "completed" 00:16:41.920 }, 00:16:41.920 "cntlid": 1, 00:16:41.920 "listen_address": { 00:16:41.920 "adrfam": "IPv4", 00:16:41.920 "traddr": "10.0.0.2", 00:16:41.920 "trsvcid": "4420", 00:16:41.920 "trtype": "TCP" 00:16:41.920 }, 00:16:41.920 "peer_address": { 00:16:41.920 "adrfam": "IPv4", 00:16:41.920 "traddr": "10.0.0.1", 00:16:41.920 "trsvcid": "43984", 00:16:41.920 "trtype": "TCP" 00:16:41.920 }, 00:16:41.920 "qid": 0, 00:16:41.920 "state": "enabled" 00:16:41.920 } 00:16:41.920 ]' 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.920 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.178 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:16:46.364 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.364 20:16:35 
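The assertions that follow each attach are the interesting part: the controller name is read back with bdev_nvme_get_controllers, and nvmf_subsystem_get_qpairs must report a qpair whose auth block matches the digest and DH group under test with state "completed". The same checks, written out directly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # the dhgroup name, i.e. the string "null"
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]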
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.364 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.622 00:16:46.622 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.622 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.622 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.881 { 00:16:46.881 "auth": { 00:16:46.881 "dhgroup": "null", 00:16:46.881 "digest": "sha256", 00:16:46.881 "state": "completed" 00:16:46.881 }, 00:16:46.881 "cntlid": 3, 00:16:46.881 "listen_address": { 00:16:46.881 "adrfam": "IPv4", 00:16:46.881 "traddr": "10.0.0.2", 00:16:46.881 "trsvcid": "4420", 00:16:46.881 "trtype": "TCP" 00:16:46.881 }, 00:16:46.881 "peer_address": { 00:16:46.881 "adrfam": "IPv4", 00:16:46.881 
"traddr": "10.0.0.1", 00:16:46.881 "trsvcid": "44002", 00:16:46.881 "trtype": "TCP" 00:16:46.881 }, 00:16:46.881 "qid": 0, 00:16:46.881 "state": "enabled" 00:16:46.881 } 00:16:46.881 ]' 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.881 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.139 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.139 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.139 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.139 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.139 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.396 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.963 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.222 20:16:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.222 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.788 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.788 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.788 { 00:16:48.788 "auth": { 00:16:48.788 "dhgroup": "null", 00:16:48.788 "digest": "sha256", 00:16:48.788 "state": "completed" 00:16:48.788 }, 00:16:48.788 "cntlid": 5, 00:16:48.788 "listen_address": { 00:16:48.788 "adrfam": "IPv4", 00:16:48.788 "traddr": "10.0.0.2", 00:16:48.788 "trsvcid": "4420", 00:16:48.788 "trtype": "TCP" 00:16:48.788 }, 00:16:48.788 "peer_address": { 00:16:48.788 "adrfam": "IPv4", 00:16:48.789 "traddr": "10.0.0.1", 00:16:48.789 "trsvcid": "58388", 00:16:48.789 "trtype": "TCP" 00:16:48.789 }, 00:16:48.789 "qid": 0, 00:16:48.789 "state": "enabled" 00:16:48.789 } 00:16:48.789 ]' 00:16:48.789 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.047 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.047 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.047 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:49.047 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.047 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.047 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.047 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.305 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.873 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.132 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.391 00:16:50.649 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.649 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.649 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
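After the SPDK-to-SPDK attach is torn down, the same subsystem is exercised from the kernel initiator: nvme connect receives the DHHC-1 strings directly as --dhchap-secret (host key) and --dhchap-ctrl-secret (controller key), which are simply the contents of the key files generated earlier; the TCP port defaults to 4420. Reduced to its essentials for the key2 pass shown above (paths from this run):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "${hostnqn##*:}" \
      --dhchap-secret "$(cat /tmp/spdk.key-sha384.6HL)" \
      --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha256.6Nm)"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0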
bdev_nvme_get_controllers 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.908 { 00:16:50.908 "auth": { 00:16:50.908 "dhgroup": "null", 00:16:50.908 "digest": "sha256", 00:16:50.908 "state": "completed" 00:16:50.908 }, 00:16:50.908 "cntlid": 7, 00:16:50.908 "listen_address": { 00:16:50.908 "adrfam": "IPv4", 00:16:50.908 "traddr": "10.0.0.2", 00:16:50.908 "trsvcid": "4420", 00:16:50.908 "trtype": "TCP" 00:16:50.908 }, 00:16:50.908 "peer_address": { 00:16:50.908 "adrfam": "IPv4", 00:16:50.908 "traddr": "10.0.0.1", 00:16:50.908 "trsvcid": "58416", 00:16:50.908 "trtype": "TCP" 00:16:50.908 }, 00:16:50.908 "qid": 0, 00:16:50.908 "state": "enabled" 00:16:50.908 } 00:16:50.908 ]' 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.908 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.167 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
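From here the trace simply advances through the matrix: the DH group loop moves from null to ffdhe2048 (and later to larger groups) while the digest stays at sha256, and every key slot is retried with the host reconfigured first. The overall shape, following the loop markers in target/auth.sh:

  for digest in "${digests[@]}"; do             # sha256, then the other digests
      for dhgroup in "${dhgroups[@]}"; do       # null, ffdhe2048, ffdhe3072, ...
          for keyid in "${!keys[@]}"; do        # key slots 0..3
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done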
sha256 --dhchap-dhgroups ffdhe2048 00:16:52.103 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.103 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.362 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.362 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.362 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.620 00:16:52.620 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.620 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.620 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.620 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.620 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.620 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.620 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.879 { 00:16:52.879 "auth": { 00:16:52.879 "dhgroup": "ffdhe2048", 00:16:52.879 "digest": "sha256", 00:16:52.879 "state": "completed" 00:16:52.879 }, 00:16:52.879 "cntlid": 9, 00:16:52.879 "listen_address": { 00:16:52.879 "adrfam": "IPv4", 00:16:52.879 "traddr": "10.0.0.2", 00:16:52.879 "trsvcid": "4420", 00:16:52.879 "trtype": "TCP" 00:16:52.879 }, 00:16:52.879 "peer_address": { 00:16:52.879 "adrfam": "IPv4", 00:16:52.879 "traddr": "10.0.0.1", 00:16:52.879 "trsvcid": "58438", 00:16:52.879 "trtype": 
"TCP" 00:16:52.879 }, 00:16:52.879 "qid": 0, 00:16:52.879 "state": "enabled" 00:16:52.879 } 00:16:52.879 ]' 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.879 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.137 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.705 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.964 20:16:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.964 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.541 00:16:54.541 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.541 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.541 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.541 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.541 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.541 20:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.541 20:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.799 { 00:16:54.799 "auth": { 00:16:54.799 "dhgroup": "ffdhe2048", 00:16:54.799 "digest": "sha256", 00:16:54.799 "state": "completed" 00:16:54.799 }, 00:16:54.799 "cntlid": 11, 00:16:54.799 "listen_address": { 00:16:54.799 "adrfam": "IPv4", 00:16:54.799 "traddr": "10.0.0.2", 00:16:54.799 "trsvcid": "4420", 00:16:54.799 "trtype": "TCP" 00:16:54.799 }, 00:16:54.799 "peer_address": { 00:16:54.799 "adrfam": "IPv4", 00:16:54.799 "traddr": "10.0.0.1", 00:16:54.799 "trsvcid": "58460", 00:16:54.799 "trtype": "TCP" 00:16:54.799 }, 00:16:54.799 "qid": 0, 00:16:54.799 "state": "enabled" 00:16:54.799 } 00:16:54.799 ]' 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.799 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.058 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.625 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.883 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.449 00:16:56.449 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.449 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.449 20:16:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.708 { 00:16:56.708 "auth": { 00:16:56.708 "dhgroup": "ffdhe2048", 00:16:56.708 "digest": "sha256", 00:16:56.708 "state": "completed" 00:16:56.708 }, 00:16:56.708 "cntlid": 13, 00:16:56.708 "listen_address": { 00:16:56.708 "adrfam": "IPv4", 00:16:56.708 "traddr": "10.0.0.2", 00:16:56.708 "trsvcid": "4420", 00:16:56.708 "trtype": "TCP" 00:16:56.708 }, 00:16:56.708 "peer_address": { 00:16:56.708 "adrfam": "IPv4", 00:16:56.708 "traddr": "10.0.0.1", 00:16:56.708 "trsvcid": "58490", 00:16:56.708 "trtype": "TCP" 00:16:56.708 }, 00:16:56.708 "qid": 0, 00:16:56.708 "state": "enabled" 00:16:56.708 } 00:16:56.708 ]' 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.708 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.966 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.904 20:16:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.904 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.163 00:16:58.421 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.421 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.421 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.680 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.680 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.680 20:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.680 20:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.680 20:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.680 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.680 { 00:16:58.680 "auth": { 00:16:58.681 "dhgroup": "ffdhe2048", 00:16:58.681 "digest": "sha256", 00:16:58.681 "state": "completed" 00:16:58.681 }, 00:16:58.681 "cntlid": 15, 00:16:58.681 "listen_address": { 00:16:58.681 "adrfam": "IPv4", 00:16:58.681 "traddr": "10.0.0.2", 00:16:58.681 "trsvcid": "4420", 00:16:58.681 "trtype": "TCP" 00:16:58.681 }, 00:16:58.681 "peer_address": { 00:16:58.681 "adrfam": "IPv4", 00:16:58.681 "traddr": "10.0.0.1", 00:16:58.681 "trsvcid": 
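Note that key slot 3 was created without a controller key (ckeys[3] is empty), so the add_host call above requests one-way authentication only: the ${ckeys[$3]:+...} expansion drops --dhchap-ctrlr-key entirely rather than passing an empty value. The same guard, spelled out:

  ckey_args=()
  if [[ -n ${ckeys[keyid]} ]]; then
      ckey_args=(--dhchap-ctrlr-key "ckey$keyid")   # bidirectional auth only when a controller key exists
  fi
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$keyid" "${ckey_args[@]}"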
"58548", 00:16:58.681 "trtype": "TCP" 00:16:58.681 }, 00:16:58.681 "qid": 0, 00:16:58.681 "state": "enabled" 00:16:58.681 } 00:16:58.681 ]' 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.681 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.939 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.505 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.764 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.022 00:17:00.022 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.022 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.022 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.588 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.588 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.588 20:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.588 20:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.588 20:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.588 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.588 { 00:17:00.588 "auth": { 00:17:00.588 "dhgroup": "ffdhe3072", 00:17:00.588 "digest": "sha256", 00:17:00.588 "state": "completed" 00:17:00.588 }, 00:17:00.588 "cntlid": 17, 00:17:00.588 "listen_address": { 00:17:00.588 "adrfam": "IPv4", 00:17:00.588 "traddr": "10.0.0.2", 00:17:00.588 "trsvcid": "4420", 00:17:00.588 "trtype": "TCP" 00:17:00.588 }, 00:17:00.589 "peer_address": { 00:17:00.589 "adrfam": "IPv4", 00:17:00.589 "traddr": "10.0.0.1", 00:17:00.589 "trsvcid": "58572", 00:17:00.589 "trtype": "TCP" 00:17:00.589 }, 00:17:00.589 "qid": 0, 00:17:00.589 "state": "enabled" 00:17:00.589 } 00:17:00.589 ]' 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.589 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.847 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:01.415 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.674 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.932 00:17:01.932 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.932 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:17:01.932 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.191 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.191 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.191 20:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.191 20:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.191 20:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.191 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.191 { 00:17:02.191 "auth": { 00:17:02.191 "dhgroup": "ffdhe3072", 00:17:02.191 "digest": "sha256", 00:17:02.191 "state": "completed" 00:17:02.191 }, 00:17:02.191 "cntlid": 19, 00:17:02.191 "listen_address": { 00:17:02.191 "adrfam": "IPv4", 00:17:02.191 "traddr": "10.0.0.2", 00:17:02.191 "trsvcid": "4420", 00:17:02.191 "trtype": "TCP" 00:17:02.191 }, 00:17:02.191 "peer_address": { 00:17:02.191 "adrfam": "IPv4", 00:17:02.191 "traddr": "10.0.0.1", 00:17:02.191 "trsvcid": "58588", 00:17:02.191 "trtype": "TCP" 00:17:02.191 }, 00:17:02.191 "qid": 0, 00:17:02.191 "state": "enabled" 00:17:02.191 } 00:17:02.191 ]' 00:17:02.191 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.450 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.450 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.450 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.450 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.450 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.450 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.450 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.709 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for 
keyid in "${!keys[@]}" 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.277 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.535 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:03.535 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.794 00:17:03.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.052 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.052 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.052 20:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.052 20:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.052 20:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.052 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.052 { 00:17:04.052 "auth": { 00:17:04.052 "dhgroup": "ffdhe3072", 00:17:04.052 "digest": "sha256", 00:17:04.052 "state": "completed" 00:17:04.052 }, 00:17:04.052 "cntlid": 21, 00:17:04.052 "listen_address": { 00:17:04.052 "adrfam": "IPv4", 00:17:04.052 "traddr": "10.0.0.2", 00:17:04.052 "trsvcid": "4420", 00:17:04.052 "trtype": "TCP" 00:17:04.052 }, 
00:17:04.052 "peer_address": { 00:17:04.052 "adrfam": "IPv4", 00:17:04.052 "traddr": "10.0.0.1", 00:17:04.052 "trsvcid": "58610", 00:17:04.052 "trtype": "TCP" 00:17:04.052 }, 00:17:04.052 "qid": 0, 00:17:04.052 "state": "enabled" 00:17:04.052 } 00:17:04.052 ]' 00:17:04.052 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.570 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.144 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.427 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.005 00:17:06.005 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.005 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.005 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.005 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.264 { 00:17:06.264 "auth": { 00:17:06.264 "dhgroup": "ffdhe3072", 00:17:06.264 "digest": "sha256", 00:17:06.264 "state": "completed" 00:17:06.264 }, 00:17:06.264 "cntlid": 23, 00:17:06.264 "listen_address": { 00:17:06.264 "adrfam": "IPv4", 00:17:06.264 "traddr": "10.0.0.2", 00:17:06.264 "trsvcid": "4420", 00:17:06.264 "trtype": "TCP" 00:17:06.264 }, 00:17:06.264 "peer_address": { 00:17:06.264 "adrfam": "IPv4", 00:17:06.264 "traddr": "10.0.0.1", 00:17:06.264 "trsvcid": "58642", 00:17:06.264 "trtype": "TCP" 00:17:06.264 }, 00:17:06.264 "qid": 0, 00:17:06.264 "state": "enabled" 00:17:06.264 } 00:17:06.264 ]' 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.264 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.522 20:16:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.088 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.346 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.912 00:17:07.912 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
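The trace repeats one fixed pattern per digest/dhgroup/key combination: restrict the host-side bdev_nvme DH-HMAC-CHAP options, register the host NQN on the target subsystem with a key pair, attach a controller over TCP, assert on the negotiated auth fields of the resulting qpair, then detach. Below is a minimal sketch of a single iteration, assembled only from commands visible in this trace; rpc_cmd is assumed to address the target application's default RPC socket, and key0/ckey0 are key names the harness is assumed to have registered on the keyring beforehand.

# Sketch of one connect_authenticate-style iteration (sha256 / ffdhe3072 / key 0),
# reconstructed from the commands in the trace above.
digest=sha256
dhgroup=ffdhe3072
keyid=0
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4

# Host-side bdev_nvme RPCs go through the /var/tmp/host.sock instance, as shown
# in the trace; rpc_cmd is assumed to hit the target's default RPC socket.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

# Limit the initiator to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow this host on the subsystem with the key pair for this iteration.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach an authenticated controller over TCP.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller exists and that authentication completed with the
# expected parameters, then detach before the next combination.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
hostrpc bdev_nvme_detach_controller nvme0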
00:17:07.912 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.912 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.169 { 00:17:08.169 "auth": { 00:17:08.169 "dhgroup": "ffdhe4096", 00:17:08.169 "digest": "sha256", 00:17:08.169 "state": "completed" 00:17:08.169 }, 00:17:08.169 "cntlid": 25, 00:17:08.169 "listen_address": { 00:17:08.169 "adrfam": "IPv4", 00:17:08.169 "traddr": "10.0.0.2", 00:17:08.169 "trsvcid": "4420", 00:17:08.169 "trtype": "TCP" 00:17:08.169 }, 00:17:08.169 "peer_address": { 00:17:08.169 "adrfam": "IPv4", 00:17:08.169 "traddr": "10.0.0.1", 00:17:08.169 "trsvcid": "58604", 00:17:08.169 "trtype": "TCP" 00:17:08.169 }, 00:17:08.169 "qid": 0, 00:17:08.169 "state": "enabled" 00:17:08.169 } 00:17:08.169 ]' 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.169 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.428 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:09.365 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.366 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.932 00:17:09.932 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.932 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.932 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.190 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.190 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.190 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.190 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.190 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.190 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.190 { 00:17:10.190 "auth": { 00:17:10.190 "dhgroup": "ffdhe4096", 00:17:10.190 "digest": "sha256", 00:17:10.190 "state": "completed" 00:17:10.190 }, 00:17:10.190 "cntlid": 27, 00:17:10.190 "listen_address": { 00:17:10.190 "adrfam": 
"IPv4", 00:17:10.190 "traddr": "10.0.0.2", 00:17:10.190 "trsvcid": "4420", 00:17:10.190 "trtype": "TCP" 00:17:10.190 }, 00:17:10.190 "peer_address": { 00:17:10.190 "adrfam": "IPv4", 00:17:10.190 "traddr": "10.0.0.1", 00:17:10.190 "trsvcid": "58630", 00:17:10.190 "trtype": "TCP" 00:17:10.190 }, 00:17:10.190 "qid": 0, 00:17:10.190 "state": "enabled" 00:17:10.190 } 00:17:10.191 ]' 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.191 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.449 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.016 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.274 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.533 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.792 00:17:11.792 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.792 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.792 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.051 { 00:17:12.051 "auth": { 00:17:12.051 "dhgroup": "ffdhe4096", 00:17:12.051 "digest": "sha256", 00:17:12.051 "state": "completed" 00:17:12.051 }, 00:17:12.051 "cntlid": 29, 00:17:12.051 "listen_address": { 00:17:12.051 "adrfam": "IPv4", 00:17:12.051 "traddr": "10.0.0.2", 00:17:12.051 "trsvcid": "4420", 00:17:12.051 "trtype": "TCP" 00:17:12.051 }, 00:17:12.051 "peer_address": { 00:17:12.051 "adrfam": "IPv4", 00:17:12.051 "traddr": "10.0.0.1", 00:17:12.051 "trsvcid": "58666", 00:17:12.051 "trtype": "TCP" 00:17:12.051 }, 00:17:12.051 "qid": 0, 00:17:12.051 "state": "enabled" 00:17:12.051 } 00:17:12.051 ]' 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.051 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.309 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.309 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.309 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.309 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.309 20:17:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.568 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.135 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.701 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.960 00:17:13.960 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:17:13.960 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.960 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.219 { 00:17:14.219 "auth": { 00:17:14.219 "dhgroup": "ffdhe4096", 00:17:14.219 "digest": "sha256", 00:17:14.219 "state": "completed" 00:17:14.219 }, 00:17:14.219 "cntlid": 31, 00:17:14.219 "listen_address": { 00:17:14.219 "adrfam": "IPv4", 00:17:14.219 "traddr": "10.0.0.2", 00:17:14.219 "trsvcid": "4420", 00:17:14.219 "trtype": "TCP" 00:17:14.219 }, 00:17:14.219 "peer_address": { 00:17:14.219 "adrfam": "IPv4", 00:17:14.219 "traddr": "10.0.0.1", 00:17:14.219 "trsvcid": "58696", 00:17:14.219 "trtype": "TCP" 00:17:14.219 }, 00:17:14.219 "qid": 0, 00:17:14.219 "state": "enabled" 00:17:14.219 } 00:17:14.219 ]' 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.219 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.784 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.351 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.610 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.868 00:17:16.127 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.127 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.127 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.386 { 00:17:16.386 "auth": { 00:17:16.386 "dhgroup": "ffdhe6144", 00:17:16.386 "digest": "sha256", 00:17:16.386 "state": "completed" 00:17:16.386 }, 00:17:16.386 "cntlid": 33, 00:17:16.386 "listen_address": { 00:17:16.386 
"adrfam": "IPv4", 00:17:16.386 "traddr": "10.0.0.2", 00:17:16.386 "trsvcid": "4420", 00:17:16.386 "trtype": "TCP" 00:17:16.386 }, 00:17:16.386 "peer_address": { 00:17:16.386 "adrfam": "IPv4", 00:17:16.386 "traddr": "10.0.0.1", 00:17:16.386 "trsvcid": "58730", 00:17:16.386 "trtype": "TCP" 00:17:16.386 }, 00:17:16.386 "qid": 0, 00:17:16.386 "state": "enabled" 00:17:16.386 } 00:17:16.386 ]' 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.386 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.645 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:17.580 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.581 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:17.581 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.581 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.581 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.581 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.581 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.581 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.839 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.097 00:17:18.097 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.097 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.097 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.357 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.357 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.357 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.357 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.357 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.357 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.357 { 00:17:18.357 "auth": { 00:17:18.357 "dhgroup": "ffdhe6144", 00:17:18.357 "digest": "sha256", 00:17:18.357 "state": "completed" 00:17:18.357 }, 00:17:18.357 "cntlid": 35, 00:17:18.357 "listen_address": { 00:17:18.357 "adrfam": "IPv4", 00:17:18.357 "traddr": "10.0.0.2", 00:17:18.357 "trsvcid": "4420", 00:17:18.357 "trtype": "TCP" 00:17:18.357 }, 00:17:18.357 "peer_address": { 00:17:18.357 "adrfam": "IPv4", 00:17:18.357 "traddr": "10.0.0.1", 00:17:18.357 "trsvcid": "56338", 00:17:18.357 "trtype": "TCP" 00:17:18.357 }, 00:17:18.357 "qid": 0, 00:17:18.357 "state": "enabled" 00:17:18.357 } 00:17:18.357 ]' 00:17:18.357 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.616 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.616 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.616 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.616 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.616 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.616 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:18.616 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.875 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.442 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.701 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:20.269 00:17:20.269 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.269 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.269 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.529 { 00:17:20.529 "auth": { 00:17:20.529 "dhgroup": "ffdhe6144", 00:17:20.529 "digest": "sha256", 00:17:20.529 "state": "completed" 00:17:20.529 }, 00:17:20.529 "cntlid": 37, 00:17:20.529 "listen_address": { 00:17:20.529 "adrfam": "IPv4", 00:17:20.529 "traddr": "10.0.0.2", 00:17:20.529 "trsvcid": "4420", 00:17:20.529 "trtype": "TCP" 00:17:20.529 }, 00:17:20.529 "peer_address": { 00:17:20.529 "adrfam": "IPv4", 00:17:20.529 "traddr": "10.0.0.1", 00:17:20.529 "trsvcid": "56360", 00:17:20.529 "trtype": "TCP" 00:17:20.529 }, 00:17:20.529 "qid": 0, 00:17:20.529 "state": "enabled" 00:17:20.529 } 00:17:20.529 ]' 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.529 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.788 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.788 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.788 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.788 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:17:21.724 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.724 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:21.724 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.724 20:17:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.724 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.724 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.724 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.725 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.292 00:17:22.292 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.292 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.292 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.551 { 00:17:22.551 "auth": { 00:17:22.551 "dhgroup": "ffdhe6144", 00:17:22.551 "digest": "sha256", 00:17:22.551 "state": "completed" 00:17:22.551 }, 00:17:22.551 "cntlid": 39, 00:17:22.551 "listen_address": { 
00:17:22.551 "adrfam": "IPv4", 00:17:22.551 "traddr": "10.0.0.2", 00:17:22.551 "trsvcid": "4420", 00:17:22.551 "trtype": "TCP" 00:17:22.551 }, 00:17:22.551 "peer_address": { 00:17:22.551 "adrfam": "IPv4", 00:17:22.551 "traddr": "10.0.0.1", 00:17:22.551 "trsvcid": "56392", 00:17:22.551 "trtype": "TCP" 00:17:22.551 }, 00:17:22.551 "qid": 0, 00:17:22.551 "state": "enabled" 00:17:22.551 } 00:17:22.551 ]' 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.551 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.809 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.376 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.635 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.202 00:17:24.203 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.203 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.203 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.461 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.461 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.461 20:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.461 20:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.461 20:17:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.461 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.461 { 00:17:24.461 "auth": { 00:17:24.461 "dhgroup": "ffdhe8192", 00:17:24.461 "digest": "sha256", 00:17:24.461 "state": "completed" 00:17:24.461 }, 00:17:24.461 "cntlid": 41, 00:17:24.461 "listen_address": { 00:17:24.461 "adrfam": "IPv4", 00:17:24.461 "traddr": "10.0.0.2", 00:17:24.461 "trsvcid": "4420", 00:17:24.461 "trtype": "TCP" 00:17:24.461 }, 00:17:24.461 "peer_address": { 00:17:24.461 "adrfam": "IPv4", 00:17:24.461 "traddr": "10.0.0.1", 00:17:24.461 "trsvcid": "56416", 00:17:24.461 "trtype": "TCP" 00:17:24.461 }, 00:17:24.461 "qid": 0, 00:17:24.461 "state": "enabled" 00:17:24.461 } 00:17:24.461 ]' 00:17:24.461 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.718 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.718 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.718 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.718 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.718 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.718 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:17:24.718 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.975 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.908 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.491 00:17:26.491 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.491 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.491 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.852 { 00:17:26.852 "auth": { 00:17:26.852 "dhgroup": "ffdhe8192", 00:17:26.852 "digest": "sha256", 00:17:26.852 "state": "completed" 00:17:26.852 }, 00:17:26.852 "cntlid": 43, 00:17:26.852 "listen_address": { 00:17:26.852 "adrfam": "IPv4", 00:17:26.852 "traddr": "10.0.0.2", 00:17:26.852 "trsvcid": "4420", 00:17:26.852 "trtype": "TCP" 00:17:26.852 }, 00:17:26.852 "peer_address": { 00:17:26.852 "adrfam": "IPv4", 00:17:26.852 "traddr": "10.0.0.1", 00:17:26.852 "trsvcid": "56442", 00:17:26.852 "trtype": "TCP" 00:17:26.852 }, 00:17:26.852 "qid": 0, 00:17:26.852 "state": "enabled" 00:17:26.852 } 00:17:26.852 ]' 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.852 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.121 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.121 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.121 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.378 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.940 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.198 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.765 00:17:28.766 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.766 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.766 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.024 { 00:17:29.024 "auth": { 
00:17:29.024 "dhgroup": "ffdhe8192", 00:17:29.024 "digest": "sha256", 00:17:29.024 "state": "completed" 00:17:29.024 }, 00:17:29.024 "cntlid": 45, 00:17:29.024 "listen_address": { 00:17:29.024 "adrfam": "IPv4", 00:17:29.024 "traddr": "10.0.0.2", 00:17:29.024 "trsvcid": "4420", 00:17:29.024 "trtype": "TCP" 00:17:29.024 }, 00:17:29.024 "peer_address": { 00:17:29.024 "adrfam": "IPv4", 00:17:29.024 "traddr": "10.0.0.1", 00:17:29.024 "trsvcid": "40490", 00:17:29.024 "trtype": "TCP" 00:17:29.024 }, 00:17:29.024 "qid": 0, 00:17:29.024 "state": "enabled" 00:17:29.024 } 00:17:29.024 ]' 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.024 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.024 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.024 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.024 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.024 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.024 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.282 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:17:29.848 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.849 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:29.849 20:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.849 20:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.849 20:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.849 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.849 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.849 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.106 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.671 00:17:30.671 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.671 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.671 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.235 { 00:17:31.235 "auth": { 00:17:31.235 "dhgroup": "ffdhe8192", 00:17:31.235 "digest": "sha256", 00:17:31.235 "state": "completed" 00:17:31.235 }, 00:17:31.235 "cntlid": 47, 00:17:31.235 "listen_address": { 00:17:31.235 "adrfam": "IPv4", 00:17:31.235 "traddr": "10.0.0.2", 00:17:31.235 "trsvcid": "4420", 00:17:31.235 "trtype": "TCP" 00:17:31.235 }, 00:17:31.235 "peer_address": { 00:17:31.235 "adrfam": "IPv4", 00:17:31.235 "traddr": "10.0.0.1", 00:17:31.235 "trsvcid": "40516", 00:17:31.235 "trtype": "TCP" 00:17:31.235 }, 00:17:31.235 "qid": 0, 00:17:31.235 "state": "enabled" 00:17:31.235 } 00:17:31.235 ]' 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.235 20:17:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.235 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.493 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:32.060 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:32.318 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:32.318 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.319 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.886 00:17:32.886 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.886 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.886 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.144 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.144 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.144 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.144 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.144 20:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.144 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.144 { 00:17:33.144 "auth": { 00:17:33.144 "dhgroup": "null", 00:17:33.144 "digest": "sha384", 00:17:33.144 "state": "completed" 00:17:33.144 }, 00:17:33.144 "cntlid": 49, 00:17:33.144 "listen_address": { 00:17:33.144 "adrfam": "IPv4", 00:17:33.144 "traddr": "10.0.0.2", 00:17:33.144 "trsvcid": "4420", 00:17:33.144 "trtype": "TCP" 00:17:33.144 }, 00:17:33.144 "peer_address": { 00:17:33.144 "adrfam": "IPv4", 00:17:33.144 "traddr": "10.0.0.1", 00:17:33.144 "trsvcid": "40538", 00:17:33.144 "trtype": "TCP" 00:17:33.144 }, 00:17:33.144 "qid": 0, 00:17:33.145 "state": "enabled" 00:17:33.145 } 00:17:33.145 ]' 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.145 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.403 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:33.970 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.970 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:33.970 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.970 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.970 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.970 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.970 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.970 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.229 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.487 00:17:34.745 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.745 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.745 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.745 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.745 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.746 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.746 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.746 20:17:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.746 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.746 { 00:17:34.746 "auth": { 00:17:34.746 "dhgroup": "null", 00:17:34.746 "digest": "sha384", 00:17:34.746 "state": "completed" 00:17:34.746 }, 00:17:34.746 "cntlid": 51, 00:17:34.746 "listen_address": { 00:17:34.746 "adrfam": "IPv4", 00:17:34.746 "traddr": "10.0.0.2", 00:17:34.746 "trsvcid": "4420", 00:17:34.746 "trtype": "TCP" 00:17:34.746 }, 00:17:34.746 "peer_address": { 00:17:34.746 "adrfam": "IPv4", 00:17:34.746 "traddr": "10.0.0.1", 00:17:34.746 "trsvcid": "40554", 00:17:34.746 "trtype": "TCP" 00:17:34.746 }, 00:17:34.746 "qid": 0, 00:17:34.746 "state": "enabled" 00:17:34.746 } 00:17:34.746 ]' 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.004 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.262 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.829 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
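The trace above and below repeats the same round trip for every digest/dhgroup/key combination under test. Condensed into a single iteration, and using only the RPCs and nvme-cli flags that appear in this log (the NQNs, address, port, key names and jq filters are copied from the trace; showing the target-side rpc_cmd calls against the default RPC socket is an assumption, since the script hides their command lines behind xtrace_disable), one pass looks roughly like this:

  # sketch of one connect_authenticate iteration, reconstructed from this trace
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host-side bdev_nvme: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

  # target side: map the host NQN to the key pair (key2/ckey2 were registered earlier in the run)
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # attach an authenticated controller and verify the resulting qpair
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expects nvme0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'        # expects sha384
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'       # expects null
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'         # expects completed
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # the same key pair is then exercised through the kernel initiator before cleanup;
  # dhchap_secret/dhchap_ctrl_secret stand for the DHHC-1:xx:... strings printed in the trace
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
      --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq expectations change with each pass (sha256 vs sha384, ffdhe2048 through ffdhe8192 vs null, key0 through key3), but the add_host / attach_controller / qpair-check / detach / nvme connect / remove_host skeleton is identical for every combination logged here.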
00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.088 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.347 00:17:36.347 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.347 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.347 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.915 { 00:17:36.915 "auth": { 00:17:36.915 "dhgroup": "null", 00:17:36.915 "digest": "sha384", 00:17:36.915 "state": "completed" 00:17:36.915 }, 00:17:36.915 "cntlid": 53, 00:17:36.915 "listen_address": { 00:17:36.915 "adrfam": "IPv4", 00:17:36.915 "traddr": "10.0.0.2", 00:17:36.915 "trsvcid": "4420", 00:17:36.915 "trtype": "TCP" 00:17:36.915 }, 00:17:36.915 "peer_address": { 00:17:36.915 "adrfam": "IPv4", 00:17:36.915 "traddr": "10.0.0.1", 00:17:36.915 "trsvcid": "40580", 00:17:36.915 "trtype": "TCP" 00:17:36.915 }, 00:17:36.915 "qid": 0, 00:17:36.915 "state": "enabled" 00:17:36.915 } 00:17:36.915 ]' 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.915 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.174 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:17:37.742 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.742 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:37.742 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.742 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.742 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.743 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.743 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.743 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:38.001 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.002 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.260 00:17:38.260 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.260 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.260 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.520 { 00:17:38.520 "auth": { 00:17:38.520 "dhgroup": "null", 00:17:38.520 "digest": "sha384", 00:17:38.520 "state": "completed" 00:17:38.520 }, 00:17:38.520 "cntlid": 55, 00:17:38.520 "listen_address": { 00:17:38.520 "adrfam": "IPv4", 00:17:38.520 "traddr": "10.0.0.2", 00:17:38.520 "trsvcid": "4420", 00:17:38.520 "trtype": "TCP" 00:17:38.520 }, 00:17:38.520 "peer_address": { 00:17:38.520 "adrfam": "IPv4", 00:17:38.520 "traddr": "10.0.0.1", 00:17:38.520 "trsvcid": "35880", 00:17:38.520 "trtype": "TCP" 00:17:38.520 }, 00:17:38.520 "qid": 0, 00:17:38.520 "state": "enabled" 00:17:38.520 } 00:17:38.520 ]' 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:38.520 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.779 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.779 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.779 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.038 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.606 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.865 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.125 00:17:40.384 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.384 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.384 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.643 20:17:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.643 { 00:17:40.643 "auth": { 00:17:40.643 "dhgroup": "ffdhe2048", 00:17:40.643 "digest": "sha384", 00:17:40.643 "state": "completed" 00:17:40.643 }, 00:17:40.643 "cntlid": 57, 00:17:40.643 "listen_address": { 00:17:40.643 "adrfam": "IPv4", 00:17:40.643 "traddr": "10.0.0.2", 00:17:40.643 "trsvcid": "4420", 00:17:40.643 "trtype": "TCP" 00:17:40.643 }, 00:17:40.643 "peer_address": { 00:17:40.643 "adrfam": "IPv4", 00:17:40.643 "traddr": "10.0.0.1", 00:17:40.643 "trsvcid": "35910", 00:17:40.643 "trtype": "TCP" 00:17:40.643 }, 00:17:40.643 "qid": 0, 00:17:40.643 "state": "enabled" 00:17:40.643 } 00:17:40.643 ]' 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.643 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.902 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.469 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.727 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.294 00:17:42.294 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.294 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.294 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.552 { 00:17:42.552 "auth": { 00:17:42.552 "dhgroup": "ffdhe2048", 00:17:42.552 "digest": "sha384", 00:17:42.552 "state": "completed" 00:17:42.552 }, 00:17:42.552 "cntlid": 59, 00:17:42.552 "listen_address": { 00:17:42.552 "adrfam": "IPv4", 00:17:42.552 "traddr": "10.0.0.2", 00:17:42.552 "trsvcid": "4420", 00:17:42.552 "trtype": "TCP" 00:17:42.552 }, 00:17:42.552 "peer_address": { 00:17:42.552 "adrfam": "IPv4", 00:17:42.552 "traddr": "10.0.0.1", 00:17:42.552 "trsvcid": "35922", 00:17:42.552 "trtype": "TCP" 00:17:42.552 }, 00:17:42.552 "qid": 0, 00:17:42.552 "state": "enabled" 00:17:42.552 } 00:17:42.552 ]' 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.552 20:17:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.552 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.811 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.747 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.006 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.006 20:17:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.265 00:17:44.265 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.265 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.265 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.522 { 00:17:44.522 "auth": { 00:17:44.522 "dhgroup": "ffdhe2048", 00:17:44.522 "digest": "sha384", 00:17:44.522 "state": "completed" 00:17:44.522 }, 00:17:44.522 "cntlid": 61, 00:17:44.522 "listen_address": { 00:17:44.522 "adrfam": "IPv4", 00:17:44.522 "traddr": "10.0.0.2", 00:17:44.522 "trsvcid": "4420", 00:17:44.522 "trtype": "TCP" 00:17:44.522 }, 00:17:44.522 "peer_address": { 00:17:44.522 "adrfam": "IPv4", 00:17:44.522 "traddr": "10.0.0.1", 00:17:44.522 "trsvcid": "35944", 00:17:44.522 "trtype": "TCP" 00:17:44.522 }, 00:17:44.522 "qid": 0, 00:17:44.522 "state": "enabled" 00:17:44.522 } 00:17:44.522 ]' 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.522 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.779 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.779 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.779 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.036 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.600 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.858 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.424 00:17:46.424 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.424 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.424 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.683 { 00:17:46.683 "auth": { 00:17:46.683 "dhgroup": "ffdhe2048", 00:17:46.683 "digest": "sha384", 00:17:46.683 "state": "completed" 00:17:46.683 }, 00:17:46.683 "cntlid": 63, 00:17:46.683 "listen_address": { 00:17:46.683 "adrfam": "IPv4", 00:17:46.683 "traddr": "10.0.0.2", 00:17:46.683 "trsvcid": "4420", 00:17:46.683 "trtype": "TCP" 00:17:46.683 }, 00:17:46.683 "peer_address": { 00:17:46.683 "adrfam": "IPv4", 00:17:46.683 "traddr": "10.0.0.1", 00:17:46.683 "trsvcid": "35980", 00:17:46.683 "trtype": "TCP" 00:17:46.683 }, 00:17:46.683 "qid": 0, 00:17:46.683 "state": "enabled" 00:17:46.683 } 00:17:46.683 ]' 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.683 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.941 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.508 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.088 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.088 00:17:48.354 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.354 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.354 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.612 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.612 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.612 20:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.612 20:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.612 20:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.613 { 00:17:48.613 "auth": { 00:17:48.613 "dhgroup": "ffdhe3072", 00:17:48.613 "digest": "sha384", 00:17:48.613 "state": "completed" 00:17:48.613 }, 00:17:48.613 "cntlid": 65, 00:17:48.613 "listen_address": { 00:17:48.613 "adrfam": "IPv4", 00:17:48.613 "traddr": "10.0.0.2", 00:17:48.613 "trsvcid": "4420", 00:17:48.613 "trtype": "TCP" 00:17:48.613 }, 00:17:48.613 "peer_address": { 00:17:48.613 "adrfam": "IPv4", 00:17:48.613 "traddr": "10.0.0.1", 00:17:48.613 "trsvcid": "41140", 00:17:48.613 "trtype": "TCP" 00:17:48.613 }, 00:17:48.613 "qid": 0, 00:17:48.613 "state": "enabled" 00:17:48.613 } 00:17:48.613 ]' 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.613 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.871 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.438 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.697 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.264 00:17:50.264 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.264 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.264 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.522 { 00:17:50.522 "auth": { 00:17:50.522 "dhgroup": "ffdhe3072", 00:17:50.522 "digest": "sha384", 00:17:50.522 "state": "completed" 00:17:50.522 }, 00:17:50.522 "cntlid": 67, 00:17:50.522 "listen_address": { 00:17:50.522 "adrfam": "IPv4", 00:17:50.522 "traddr": "10.0.0.2", 00:17:50.522 "trsvcid": "4420", 00:17:50.522 "trtype": "TCP" 00:17:50.522 }, 00:17:50.522 "peer_address": { 00:17:50.522 "adrfam": "IPv4", 00:17:50.522 "traddr": "10.0.0.1", 00:17:50.522 "trsvcid": "41176", 00:17:50.522 "trtype": "TCP" 00:17:50.522 }, 00:17:50.522 "qid": 0, 00:17:50.522 "state": "enabled" 00:17:50.522 } 00:17:50.522 ]' 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.522 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.088 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:51.346 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:51.604 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:51.604 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.604 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.604 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.604 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.604 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.604 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.862 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.120 00:17:52.120 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.120 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.120 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.378 { 00:17:52.378 "auth": { 00:17:52.378 "dhgroup": "ffdhe3072", 00:17:52.378 "digest": "sha384", 00:17:52.378 "state": "completed" 00:17:52.378 }, 00:17:52.378 "cntlid": 69, 00:17:52.378 "listen_address": { 00:17:52.378 "adrfam": "IPv4", 00:17:52.378 "traddr": "10.0.0.2", 00:17:52.378 "trsvcid": "4420", 00:17:52.378 "trtype": "TCP" 00:17:52.378 }, 00:17:52.378 "peer_address": { 00:17:52.378 "adrfam": "IPv4", 00:17:52.378 "traddr": "10.0.0.1", 00:17:52.378 "trsvcid": "41222", 00:17:52.378 "trtype": "TCP" 00:17:52.378 }, 00:17:52.378 "qid": 0, 00:17:52.378 "state": "enabled" 00:17:52.378 } 00:17:52.378 ]' 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.378 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.636 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.636 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.636 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.636 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.570 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.828 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.086 00:17:54.086 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.086 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.086 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.344 { 00:17:54.344 "auth": { 00:17:54.344 "dhgroup": "ffdhe3072", 00:17:54.344 "digest": "sha384", 00:17:54.344 "state": "completed" 00:17:54.344 }, 00:17:54.344 "cntlid": 71, 00:17:54.344 "listen_address": { 00:17:54.344 "adrfam": "IPv4", 00:17:54.344 "traddr": "10.0.0.2", 00:17:54.344 "trsvcid": "4420", 00:17:54.344 "trtype": "TCP" 00:17:54.344 }, 00:17:54.344 "peer_address": { 00:17:54.344 "adrfam": "IPv4", 00:17:54.344 "traddr": "10.0.0.1", 00:17:54.344 "trsvcid": "41260", 00:17:54.344 "trtype": "TCP" 00:17:54.344 }, 00:17:54.344 "qid": 0, 00:17:54.344 "state": "enabled" 00:17:54.344 } 00:17:54.344 ]' 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:17:54.344 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.601 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.601 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.601 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.858 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.423 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.424 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.682 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.247 00:17:56.247 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.247 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.247 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.506 { 00:17:56.506 "auth": { 00:17:56.506 "dhgroup": "ffdhe4096", 00:17:56.506 "digest": "sha384", 00:17:56.506 "state": "completed" 00:17:56.506 }, 00:17:56.506 "cntlid": 73, 00:17:56.506 "listen_address": { 00:17:56.506 "adrfam": "IPv4", 00:17:56.506 "traddr": "10.0.0.2", 00:17:56.506 "trsvcid": "4420", 00:17:56.506 "trtype": "TCP" 00:17:56.506 }, 00:17:56.506 "peer_address": { 00:17:56.506 "adrfam": "IPv4", 00:17:56.506 "traddr": "10.0.0.1", 00:17:56.506 "trsvcid": "41280", 00:17:56.506 "trtype": "TCP" 00:17:56.506 }, 00:17:56.506 "qid": 0, 00:17:56.506 "state": "enabled" 00:17:56.506 } 00:17:56.506 ]' 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.506 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.764 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:57.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.330 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.588 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.155 00:17:58.155 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.155 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.155 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.414 { 00:17:58.414 "auth": { 00:17:58.414 "dhgroup": "ffdhe4096", 00:17:58.414 "digest": "sha384", 00:17:58.414 "state": "completed" 00:17:58.414 }, 00:17:58.414 "cntlid": 75, 00:17:58.414 "listen_address": { 00:17:58.414 "adrfam": "IPv4", 00:17:58.414 "traddr": "10.0.0.2", 00:17:58.414 "trsvcid": "4420", 00:17:58.414 "trtype": "TCP" 00:17:58.414 }, 00:17:58.414 "peer_address": { 00:17:58.414 "adrfam": "IPv4", 00:17:58.414 "traddr": "10.0.0.1", 00:17:58.414 "trsvcid": "47668", 00:17:58.414 "trtype": "TCP" 00:17:58.414 }, 00:17:58.414 "qid": 0, 00:17:58.414 "state": "enabled" 00:17:58.414 } 00:17:58.414 ]' 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.414 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.673 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:59.241 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe4096 2 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.528 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.786 00:17:59.786 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.787 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.787 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.045 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.045 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.045 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.045 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.045 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.045 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.045 { 00:18:00.045 "auth": { 00:18:00.045 "dhgroup": "ffdhe4096", 00:18:00.045 "digest": "sha384", 00:18:00.045 "state": "completed" 00:18:00.045 }, 00:18:00.045 "cntlid": 77, 00:18:00.045 "listen_address": { 00:18:00.045 "adrfam": "IPv4", 00:18:00.045 "traddr": "10.0.0.2", 00:18:00.045 "trsvcid": "4420", 00:18:00.045 "trtype": "TCP" 00:18:00.045 }, 00:18:00.045 "peer_address": { 00:18:00.045 "adrfam": "IPv4", 00:18:00.045 "traddr": "10.0.0.1", 00:18:00.045 "trsvcid": "47708", 00:18:00.045 "trtype": "TCP" 00:18:00.045 }, 00:18:00.045 "qid": 0, 00:18:00.045 "state": "enabled" 00:18:00.045 } 00:18:00.045 ]' 00:18:00.045 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.302 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
00:18:00.302 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.302 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.302 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.302 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.302 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.302 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.560 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:01.125 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.383 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.947 00:18:01.947 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.947 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.947 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.204 { 00:18:02.204 "auth": { 00:18:02.204 "dhgroup": "ffdhe4096", 00:18:02.204 "digest": "sha384", 00:18:02.204 "state": "completed" 00:18:02.204 }, 00:18:02.204 "cntlid": 79, 00:18:02.204 "listen_address": { 00:18:02.204 "adrfam": "IPv4", 00:18:02.204 "traddr": "10.0.0.2", 00:18:02.204 "trsvcid": "4420", 00:18:02.204 "trtype": "TCP" 00:18:02.204 }, 00:18:02.204 "peer_address": { 00:18:02.204 "adrfam": "IPv4", 00:18:02.204 "traddr": "10.0.0.1", 00:18:02.204 "trsvcid": "47730", 00:18:02.204 "trtype": "TCP" 00:18:02.204 }, 00:18:02.204 "qid": 0, 00:18:02.204 "state": "enabled" 00:18:02.204 } 00:18:02.204 ]' 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.204 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.771 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.339 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.339 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.598 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.598 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.598 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.858 00:18:03.858 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.858 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.858 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.117 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.117 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.117 20:17:53 
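Every connect_authenticate pass in target/auth.sh repeats the same sequence that is traced in the entries above. A condensed re-creation of one pass (sha384/ffdhe6144, key 0), using only commands that appear in this log and assuming the test's own hostrpc/rpc_cmd wrappers are in scope (hostrpc drives rpc.py against the host-side socket, rpc_cmd against the nvmf target):

# One connect_authenticate pass, condensed from the trace above.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4

# Pin the initiator to a single digest/DH group for this pass.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# Require DH-HMAC-CHAP with key0 (bidirectional via ckey0) on the target side.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Attach from the host app; this succeeds only if authentication completes.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"            # .auth.state should be "completed"
hostrpc bdev_nvme_detach_controller nvme0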
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.117 20:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.375 { 00:18:04.375 "auth": { 00:18:04.375 "dhgroup": "ffdhe6144", 00:18:04.375 "digest": "sha384", 00:18:04.375 "state": "completed" 00:18:04.375 }, 00:18:04.375 "cntlid": 81, 00:18:04.375 "listen_address": { 00:18:04.375 "adrfam": "IPv4", 00:18:04.375 "traddr": "10.0.0.2", 00:18:04.375 "trsvcid": "4420", 00:18:04.375 "trtype": "TCP" 00:18:04.375 }, 00:18:04.375 "peer_address": { 00:18:04.375 "adrfam": "IPv4", 00:18:04.375 "traddr": "10.0.0.1", 00:18:04.375 "trsvcid": "47760", 00:18:04.375 "trtype": "TCP" 00:18:04.375 }, 00:18:04.375 "qid": 0, 00:18:04.375 "state": "enabled" 00:18:04.375 } 00:18:04.375 ]' 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.375 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.633 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:18:05.569 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.570 
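The qpairs JSON dumped above is exactly what the @46-@48 assertions read back. The same check can be written as a small stand-alone helper; the jq paths match the ones in the log, only the bash comparison style is simplified:

# Verify that qpair 0 authenticated with the expected digest and DH group.
check_qpair_auth() {
    local qpairs=$1 digest=$2 dhgroup=$3
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] &&
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
}
# usage: check_qpair_auth "$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)" sha384 ffdhe6144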
20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.570 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.136 00:18:06.136 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.136 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.136 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.394 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.394 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.395 20:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.395 20:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.395 20:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.395 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.395 { 00:18:06.395 "auth": { 00:18:06.395 "dhgroup": "ffdhe6144", 00:18:06.395 "digest": "sha384", 00:18:06.395 "state": "completed" 00:18:06.395 }, 00:18:06.395 "cntlid": 83, 00:18:06.395 "listen_address": { 00:18:06.395 "adrfam": "IPv4", 00:18:06.395 "traddr": "10.0.0.2", 00:18:06.395 "trsvcid": "4420", 00:18:06.395 "trtype": "TCP" 00:18:06.395 }, 00:18:06.395 "peer_address": { 00:18:06.395 "adrfam": "IPv4", 00:18:06.395 "traddr": "10.0.0.1", 00:18:06.395 "trsvcid": "47798", 00:18:06.395 "trtype": "TCP" 00:18:06.395 }, 00:18:06.395 "qid": 0, 00:18:06.395 "state": "enabled" 00:18:06.395 } 00:18:06.395 ]' 00:18:06.395 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.653 20:17:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.653 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.653 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.653 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.653 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.653 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.653 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.911 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:18:07.477 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.477 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:07.477 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.478 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.478 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.478 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.478 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.478 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.736 20:17:56 
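The nvmf_subsystem_add_host calls above are what arm authentication on the target: once the host NQN is added with --dhchap-key, the subsystem refuses that host unless DH-HMAC-CHAP succeeds, and --dhchap-ctrlr-key additionally makes the host authenticate the controller. key2/ckey2 are key names, not secrets; the key material was registered earlier in the test, outside this excerpt. A hedged sketch of that setup (the keyring_file_add_key call and the key file paths are assumptions about the earlier, unshown part of the script):

# Assumption: the secrets were written to files and registered in SPDK's
# keyring before this excerpt; the exact RPC and paths may differ in the
# real test revision.
rpc_cmd keyring_file_add_key key2  /tmp/spdk.key2    # host secret
rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.ckey2   # controller (bidirectional) secret
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2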
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.736 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.338 00:18:08.338 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.338 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.338 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.598 { 00:18:08.598 "auth": { 00:18:08.598 "dhgroup": "ffdhe6144", 00:18:08.598 "digest": "sha384", 00:18:08.598 "state": "completed" 00:18:08.598 }, 00:18:08.598 "cntlid": 85, 00:18:08.598 "listen_address": { 00:18:08.598 "adrfam": "IPv4", 00:18:08.598 "traddr": "10.0.0.2", 00:18:08.598 "trsvcid": "4420", 00:18:08.598 "trtype": "TCP" 00:18:08.598 }, 00:18:08.598 "peer_address": { 00:18:08.598 "adrfam": "IPv4", 00:18:08.598 "traddr": "10.0.0.1", 00:18:08.598 "trsvcid": "55214", 00:18:08.598 "trtype": "TCP" 00:18:08.598 }, 00:18:08.598 "qid": 0, 00:18:08.598 "state": "enabled" 00:18:08.598 } 00:18:08.598 ]' 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.598 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.857 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret 
DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:09.424 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.683 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.250 00:18:10.250 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.250 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.250 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.508 { 00:18:10.508 "auth": { 00:18:10.508 "dhgroup": "ffdhe6144", 00:18:10.508 "digest": "sha384", 00:18:10.508 "state": "completed" 00:18:10.508 }, 00:18:10.508 "cntlid": 87, 00:18:10.508 "listen_address": { 00:18:10.508 "adrfam": "IPv4", 00:18:10.508 "traddr": "10.0.0.2", 00:18:10.508 "trsvcid": "4420", 00:18:10.508 "trtype": "TCP" 00:18:10.508 }, 00:18:10.508 "peer_address": { 00:18:10.508 "adrfam": "IPv4", 00:18:10.508 "traddr": "10.0.0.1", 00:18:10.508 "trsvcid": "55250", 00:18:10.508 "trtype": "TCP" 00:18:10.508 }, 00:18:10.508 "qid": 0, 00:18:10.508 "state": "enabled" 00:18:10.508 } 00:18:10.508 ]' 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.508 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.767 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.767 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.767 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.767 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.767 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.026 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.594 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.853 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.787 00:18:12.787 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.787 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.788 { 00:18:12.788 "auth": { 00:18:12.788 "dhgroup": "ffdhe8192", 00:18:12.788 "digest": "sha384", 00:18:12.788 "state": "completed" 00:18:12.788 }, 00:18:12.788 "cntlid": 89, 00:18:12.788 "listen_address": { 00:18:12.788 "adrfam": "IPv4", 00:18:12.788 "traddr": "10.0.0.2", 00:18:12.788 "trsvcid": "4420", 00:18:12.788 "trtype": "TCP" 00:18:12.788 }, 00:18:12.788 "peer_address": { 00:18:12.788 "adrfam": "IPv4", 00:18:12.788 "traddr": "10.0.0.1", 00:18:12.788 "trsvcid": "55286", 00:18:12.788 "trtype": "TCP" 00:18:12.788 }, 00:18:12.788 "qid": 0, 00:18:12.788 "state": "enabled" 00:18:12.788 } 00:18:12.788 ]' 00:18:12.788 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
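Two different RPC endpoints are being driven in these entries: rpc_cmd talks to the NVMe-oF target, while the hostrpc lines (expanded at target/auth.sh@31 throughout this log) talk to a second SPDK application that plays the host role through its bdev_nvme module and listens on /var/tmp/host.sock. The wrapper is nothing more than what the @31 expansions show:

# As expanded in the target/auth.sh@31 trace lines above.
hostrpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}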
00:18:13.046 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.046 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.046 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.046 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.046 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.046 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.046 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.304 20:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:18:13.872 20:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.872 20:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:13.872 20:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.872 20:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.130 20:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.130 20:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.130 20:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.130 20:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.388 20:18:03 
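Besides the SPDK-to-SPDK attach, each key is also exercised through the kernel initiator: nvme connect is given the raw DHHC-1 secrets directly (rather than key names), authenticates in-band, and is then torn down with nvme disconnect before the host entry is removed. Stripped of the log prefixes, that step is:

# Kernel-initiator pass; $host_secret/$ctrl_secret stand for the DHHC-1
# values generated by the test (the full strings appear verbatim above).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
    --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4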
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.388 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.953 00:18:14.953 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.953 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.953 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.211 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.212 { 00:18:15.212 "auth": { 00:18:15.212 "dhgroup": "ffdhe8192", 00:18:15.212 "digest": "sha384", 00:18:15.212 "state": "completed" 00:18:15.212 }, 00:18:15.212 "cntlid": 91, 00:18:15.212 "listen_address": { 00:18:15.212 "adrfam": "IPv4", 00:18:15.212 "traddr": "10.0.0.2", 00:18:15.212 "trsvcid": "4420", 00:18:15.212 "trtype": "TCP" 00:18:15.212 }, 00:18:15.212 "peer_address": { 00:18:15.212 "adrfam": "IPv4", 00:18:15.212 "traddr": "10.0.0.1", 00:18:15.212 "trsvcid": "55318", 00:18:15.212 "trtype": "TCP" 00:18:15.212 }, 00:18:15.212 "qid": 0, 00:18:15.212 "state": "enabled" 00:18:15.212 } 00:18:15.212 ]' 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.212 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.470 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret 
DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.037 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.296 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.863 00:18:16.863 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.863 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.863 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.430 20:18:06 
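The secrets themselves use the NVMe-oF DH-HMAC-CHAP representation "DHHC-1:<t>:<base64 blob>:", where the second field records the hash the secret was transformed with (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); the key0-key3 secrets in this run carry 00-03 respectively. A tiny, self-contained way to read that field back out of a secret string:

# Decode the transform field of a DHHC-1 secret. Mapping is per the NVMe-oF
# secret format as documented for nvme-cli/SPDK, not something taken from
# this log, so treat it as a hedged reference.
dhchap_transform() {
    case "$(cut -d: -f2 <<< "$1")" in
        00) echo "unhashed" ;;
        01) echo "SHA-256" ;;
        02) echo "SHA-384" ;;
        03) echo "SHA-512" ;;
        *)  echo "unknown" ;;
    esac
}
# e.g. dhchap_transform 'DHHC-1:01:...' prints SHA-256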
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.430 { 00:18:17.430 "auth": { 00:18:17.430 "dhgroup": "ffdhe8192", 00:18:17.430 "digest": "sha384", 00:18:17.430 "state": "completed" 00:18:17.430 }, 00:18:17.430 "cntlid": 93, 00:18:17.430 "listen_address": { 00:18:17.430 "adrfam": "IPv4", 00:18:17.430 "traddr": "10.0.0.2", 00:18:17.430 "trsvcid": "4420", 00:18:17.430 "trtype": "TCP" 00:18:17.430 }, 00:18:17.430 "peer_address": { 00:18:17.430 "adrfam": "IPv4", 00:18:17.430 "traddr": "10.0.0.1", 00:18:17.430 "trsvcid": "55342", 00:18:17.430 "trtype": "TCP" 00:18:17.430 }, 00:18:17.430 "qid": 0, 00:18:17.430 "state": "enabled" 00:18:17.430 } 00:18:17.430 ]' 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.430 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.689 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:18.624 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.624 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:18.624 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.624 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.624 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.624 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.625 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.625 20:18:07 
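A note on the odd-looking comparisons such as [[ nvme0 == \n\v\m\e\0 ]] and [[ sha384 == \s\h\a\3\8\4 ]]: that is purely how bash xtrace renders a quoted right-hand side of == inside [[ ]], escaping each character to show it is matched literally rather than as a glob pattern. The following reproduces the same trace style:

# With `set -x`, a quoted RHS of [[ == ]] is printed character-escaped.
set -x
digest=sha384
[[ $digest == "sha384" ]]    # traced as: [[ sha384 == \s\h\a\3\8\4 ]]
set +x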
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.883 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.450 00:18:19.450 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.450 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.450 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.709 { 00:18:19.709 "auth": { 00:18:19.709 "dhgroup": "ffdhe8192", 00:18:19.709 "digest": "sha384", 00:18:19.709 "state": "completed" 00:18:19.709 }, 00:18:19.709 "cntlid": 95, 00:18:19.709 "listen_address": { 00:18:19.709 "adrfam": "IPv4", 00:18:19.709 "traddr": "10.0.0.2", 00:18:19.709 "trsvcid": "4420", 00:18:19.709 "trtype": "TCP" 00:18:19.709 }, 00:18:19.709 "peer_address": { 00:18:19.709 "adrfam": "IPv4", 00:18:19.709 "traddr": "10.0.0.1", 00:18:19.709 "trsvcid": "41358", 00:18:19.709 "trtype": "TCP" 00:18:19.709 }, 00:18:19.709 "qid": 0, 00:18:19.709 "state": "enabled" 00:18:19.709 } 00:18:19.709 ]' 00:18:19.709 20:18:08 
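Note that the key3 passes add the host with --dhchap-key only: the @37 line ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to an empty array when no controller key exists for that index, so bidirectional (controller) authentication is simply skipped for key3, both in the RPC attach and in the nvme connect (no --dhchap-ctrl-secret). The expansion pattern in isolation, with illustrative array contents:

# ${var:+word} expands only if var is set and non-empty, so the extra
# --dhchap-ctrlr-key arguments appear only for keys that have a ctrl secret.
ckeys=(c0secret c1secret c2secret)    # illustrative: no element at index 3
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"                    # prints 0: no controller key for key3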
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.709 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.968 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.968 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.968 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.228 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:20.795 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- 
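The @91/@92/@93 lines above mark the outer structure of this whole section: the test walks every digest, every DH group, and every key index, calling connect_authenticate for each combination (this excerpt covers the tail of sha384 and the start of sha512 with the null group). The driver loop, reduced to the pieces visible in these traces (the full contents of the digests/dhgroups/keys arrays are defined earlier in the script and are only partially visible here):

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # @94: restrict the host to one digest/dhgroup, @96: run the pass.
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done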
common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.054 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.055 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.055 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.313 00:18:21.572 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.572 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.572 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.572 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.572 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.572 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.573 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.573 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.573 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.573 { 00:18:21.573 "auth": { 00:18:21.573 "dhgroup": "null", 00:18:21.573 "digest": "sha512", 00:18:21.573 "state": "completed" 00:18:21.573 }, 00:18:21.573 "cntlid": 97, 00:18:21.573 "listen_address": { 00:18:21.573 "adrfam": "IPv4", 00:18:21.573 "traddr": "10.0.0.2", 00:18:21.573 "trsvcid": "4420", 00:18:21.573 "trtype": "TCP" 00:18:21.573 }, 00:18:21.573 "peer_address": { 00:18:21.573 "adrfam": "IPv4", 00:18:21.573 "traddr": "10.0.0.1", 00:18:21.573 "trsvcid": "41378", 00:18:21.573 "trtype": "TCP" 00:18:21.573 }, 00:18:21.573 "qid": 0, 00:18:21.573 "state": "enabled" 00:18:21.573 } 00:18:21.573 ]' 00:18:21.573 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.832 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.832 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.832 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.832 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.832 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.832 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.832 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.091 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:22.659 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.919 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.178 00:18:23.178 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.178 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:23.178 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.437 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.437 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.437 20:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.437 20:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.437 20:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.437 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.437 { 00:18:23.437 "auth": { 00:18:23.437 "dhgroup": "null", 00:18:23.437 "digest": "sha512", 00:18:23.437 "state": "completed" 00:18:23.437 }, 00:18:23.437 "cntlid": 99, 00:18:23.437 "listen_address": { 00:18:23.437 "adrfam": "IPv4", 00:18:23.437 "traddr": "10.0.0.2", 00:18:23.437 "trsvcid": "4420", 00:18:23.437 "trtype": "TCP" 00:18:23.437 }, 00:18:23.437 "peer_address": { 00:18:23.437 "adrfam": "IPv4", 00:18:23.437 "traddr": "10.0.0.1", 00:18:23.437 "trsvcid": "41414", 00:18:23.437 "trtype": "TCP" 00:18:23.437 }, 00:18:23.437 "qid": 0, 00:18:23.437 "state": "enabled" 00:18:23.437 } 00:18:23.437 ]' 00:18:23.437 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.695 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.695 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.695 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.695 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.695 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.695 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.695 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.954 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:18:24.520 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.779 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:24.779 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.779 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.779 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.779 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.779 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # 
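The jq pipelines in the trace are the actual assertions: once the controller is attached, nvmf_subsystem_get_qpairs has to report the negotiated digest and DH group on the qid 0 qpair with an auth state of "completed", and bdev_nvme_get_controllers has to show the controller as nvme0. Folded into one helper (an illustrative name; rpc, hostsock and subnqn as in the sketch above), the check is roughly:

# Compact form of the checks done at target/auth.sh@44-48.
verify_qpair_auth() {
  local digest=$1 dhgroup=$2 qpairs

  # The attached controller must be visible on the host-side app as nvme0.
  [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] || return 1

  # The first reported qpair (qid 0 in this log) must have completed
  # DH-HMAC-CHAP with the expected parameters.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] || return 1
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] || return 1
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]] || return 1
}

verify_qpair_auth sha512 null   # e.g. for the iteration above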
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:24.779 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.037 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.309 00:18:25.309 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.309 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.309 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.581 { 00:18:25.581 "auth": { 00:18:25.581 "dhgroup": "null", 00:18:25.581 "digest": "sha512", 00:18:25.581 "state": "completed" 00:18:25.581 }, 00:18:25.581 "cntlid": 101, 00:18:25.581 "listen_address": { 00:18:25.581 "adrfam": "IPv4", 00:18:25.581 "traddr": "10.0.0.2", 00:18:25.581 "trsvcid": "4420", 00:18:25.581 "trtype": "TCP" 00:18:25.581 }, 00:18:25.581 "peer_address": { 00:18:25.581 "adrfam": "IPv4", 00:18:25.581 "traddr": "10.0.0.1", 00:18:25.581 "trsvcid": 
"41432", 00:18:25.581 "trtype": "TCP" 00:18:25.581 }, 00:18:25.581 "qid": 0, 00:18:25.581 "state": "enabled" 00:18:25.581 } 00:18:25.581 ]' 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.581 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.839 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:26.406 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.406 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:26.406 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.406 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.665 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.665 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.665 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.665 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.924 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.183 00:18:27.183 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.183 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.183 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.442 { 00:18:27.442 "auth": { 00:18:27.442 "dhgroup": "null", 00:18:27.442 "digest": "sha512", 00:18:27.442 "state": "completed" 00:18:27.442 }, 00:18:27.442 "cntlid": 103, 00:18:27.442 "listen_address": { 00:18:27.442 "adrfam": "IPv4", 00:18:27.442 "traddr": "10.0.0.2", 00:18:27.442 "trsvcid": "4420", 00:18:27.442 "trtype": "TCP" 00:18:27.442 }, 00:18:27.442 "peer_address": { 00:18:27.442 "adrfam": "IPv4", 00:18:27.442 "traddr": "10.0.0.1", 00:18:27.442 "trsvcid": "37850", 00:18:27.442 "trtype": "TCP" 00:18:27.442 }, 00:18:27.442 "qid": 0, 00:18:27.442 "state": "enabled" 00:18:27.442 } 00:18:27.442 ]' 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.442 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.700 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid 
caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.267 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.525 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.526 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.092 00:18:29.092 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.092 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.092 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
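From here on the outer loop has advanced from the null DH group to ffdhe2048. The driving structure is echoed by the xtrace lines for target/auth.sh@92-94: every configured DH group is exercised against every key slot, with the sha512 digest throughout this stretch of the run. Reconstructed from those lines (the keys and ckeys arrays are populated earlier in the test and not shown in this excerpt; connect_authenticate is the helper sketched further up), the sweep looks like:

# hostrpc is the wrapper the trace expands at target/auth.sh@31 on every call.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# DH groups seen so far in this log; the script's full list may be longer.
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Pin the host-side initiator to the combination under test, then run
    # the add-host / attach / verify / kernel-connect / cleanup cycle.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha512 "$dhgroup" "$keyid"
  done
done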
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.351 { 00:18:29.351 "auth": { 00:18:29.351 "dhgroup": "ffdhe2048", 00:18:29.351 "digest": "sha512", 00:18:29.351 "state": "completed" 00:18:29.351 }, 00:18:29.351 "cntlid": 105, 00:18:29.351 "listen_address": { 00:18:29.351 "adrfam": "IPv4", 00:18:29.351 "traddr": "10.0.0.2", 00:18:29.351 "trsvcid": "4420", 00:18:29.351 "trtype": "TCP" 00:18:29.351 }, 00:18:29.351 "peer_address": { 00:18:29.351 "adrfam": "IPv4", 00:18:29.351 "traddr": "10.0.0.1", 00:18:29.351 "trsvcid": "37886", 00:18:29.351 "trtype": "TCP" 00:18:29.351 }, 00:18:29.351 "qid": 0, 00:18:29.351 "state": "enabled" 00:18:29.351 } 00:18:29.351 ]' 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.351 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.609 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.544 20:18:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.544 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.112 00:18:31.112 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.112 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.112 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.112 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.112 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.112 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.112 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.112 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.112 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.112 { 00:18:31.112 "auth": { 00:18:31.112 "dhgroup": "ffdhe2048", 00:18:31.112 "digest": "sha512", 00:18:31.112 "state": "completed" 00:18:31.112 }, 00:18:31.112 "cntlid": 107, 00:18:31.112 "listen_address": { 00:18:31.112 "adrfam": "IPv4", 00:18:31.112 "traddr": "10.0.0.2", 00:18:31.112 "trsvcid": "4420", 00:18:31.112 "trtype": "TCP" 00:18:31.112 }, 00:18:31.112 "peer_address": { 00:18:31.112 
"adrfam": "IPv4", 00:18:31.112 "traddr": "10.0.0.1", 00:18:31.112 "trsvcid": "37910", 00:18:31.112 "trtype": "TCP" 00:18:31.112 }, 00:18:31.112 "qid": 0, 00:18:31.112 "state": "enabled" 00:18:31.112 } 00:18:31.112 ]' 00:18:31.112 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.371 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.371 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.371 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.371 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.371 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.371 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.371 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.630 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.197 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.455 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.713 00:18:32.713 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.713 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.713 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.972 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.972 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.972 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.972 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.972 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.972 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.972 { 00:18:32.972 "auth": { 00:18:32.972 "dhgroup": "ffdhe2048", 00:18:32.972 "digest": "sha512", 00:18:32.972 "state": "completed" 00:18:32.972 }, 00:18:32.972 "cntlid": 109, 00:18:32.972 "listen_address": { 00:18:32.972 "adrfam": "IPv4", 00:18:32.972 "traddr": "10.0.0.2", 00:18:32.972 "trsvcid": "4420", 00:18:32.972 "trtype": "TCP" 00:18:32.972 }, 00:18:32.972 "peer_address": { 00:18:32.972 "adrfam": "IPv4", 00:18:32.972 "traddr": "10.0.0.1", 00:18:32.972 "trsvcid": "37946", 00:18:32.972 "trtype": "TCP" 00:18:32.972 }, 00:18:32.972 "qid": 0, 00:18:32.972 "state": "enabled" 00:18:32.972 } 00:18:32.972 ]' 00:18:32.972 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.972 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.231 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.231 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.231 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.231 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.231 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.231 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.490 20:18:22 
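The entry that follows is the kernel-initiator leg of the same iteration: nvme-cli connects to the target with the DHHC-1 secrets for the key pair under test and is disconnected again right away, since only the authenticated fabric connect matters here. In isolation, with dhchap_secret and dhchap_ctrl_secret as placeholders for the DHHC-1 strings printed in the trace (the key3 rounds simply omit the --dhchap-ctrl-secret argument, see further down), that leg is:

# Kernel-side connect/disconnect as run at target/auth.sh@52-55.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4
hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4

# dhchap_secret / dhchap_ctrl_secret: the "DHHC-1:NN:<base64>:" strings shown
# in the surrounding trace for the key slot being tested.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
  -q "$hostnqn" --hostid "$hostid" \
  --dhchap-secret "$dhchap_secret" \
  --dhchap-ctrl-secret "$dhchap_ctrl_secret"

# Tear down immediately; the test only needs the connect to authenticate.
nvme disconnect -n "$subnqn"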
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.057 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.315 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.316 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.575 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.834 { 00:18:34.834 "auth": { 00:18:34.834 "dhgroup": "ffdhe2048", 00:18:34.834 "digest": "sha512", 00:18:34.834 "state": "completed" 00:18:34.834 }, 00:18:34.834 "cntlid": 111, 00:18:34.834 "listen_address": { 00:18:34.834 "adrfam": "IPv4", 00:18:34.834 "traddr": "10.0.0.2", 00:18:34.834 "trsvcid": "4420", 00:18:34.834 "trtype": "TCP" 00:18:34.834 }, 00:18:34.834 "peer_address": { 00:18:34.834 "adrfam": "IPv4", 00:18:34.834 "traddr": "10.0.0.1", 00:18:34.834 "trsvcid": "37958", 00:18:34.834 "trtype": "TCP" 00:18:34.834 }, 00:18:34.834 "qid": 0, 00:18:34.834 "state": "enabled" 00:18:34.834 } 00:18:34.834 ]' 00:18:34.834 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.092 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.092 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.092 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.092 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.092 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.092 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.092 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.351 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
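The round that just completed used key3, the one slot without a controller key: nvmf_subsystem_add_host was called with only --dhchap-key key3, and the kernel connect carried no --dhchap-ctrl-secret, so only the host authenticates itself. The line echoed at target/auth.sh@37, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), is what makes the controller key optional; a self-contained illustration of that expansion, with a stand-in ckeys array:

# Stand-in for the ckeys array set up earlier in the test: slots 0-2 have a
# controller key, slot 3 does not.
ckeys=(c0 c1 c2)

for keyid in 0 3; do
  # Same parameter expansion as target/auth.sh@37: it yields the two extra
  # arguments only when ckeys[keyid] is set and non-empty.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "keyid=$keyid ->" nvmf_subsystem_add_host ... --dhchap-key "key$keyid" "${ckey[@]}"
done
# keyid=0 -> nvmf_subsystem_add_host ... --dhchap-key key0 --dhchap-ctrlr-key ckey0
# keyid=3 -> nvmf_subsystem_add_host ... --dhchap-key key3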
"${!keys[@]}" 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.919 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.178 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.744 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.744 { 00:18:36.744 "auth": { 00:18:36.744 "dhgroup": "ffdhe3072", 00:18:36.744 "digest": "sha512", 00:18:36.744 "state": "completed" 00:18:36.744 }, 00:18:36.744 "cntlid": 113, 00:18:36.744 "listen_address": { 00:18:36.744 "adrfam": "IPv4", 00:18:36.744 "traddr": "10.0.0.2", 00:18:36.744 "trsvcid": "4420", 00:18:36.744 "trtype": "TCP" 00:18:36.744 }, 00:18:36.744 
"peer_address": { 00:18:36.744 "adrfam": "IPv4", 00:18:36.744 "traddr": "10.0.0.1", 00:18:36.744 "trsvcid": "37996", 00:18:36.744 "trtype": "TCP" 00:18:36.744 }, 00:18:36.744 "qid": 0, 00:18:36.744 "state": "enabled" 00:18:36.744 } 00:18:36.744 ]' 00:18:36.744 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.003 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.003 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.003 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.003 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.003 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.003 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.003 20:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.261 20:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.827 20:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.085 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.344 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.344 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.344 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.601 00:18:38.601 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.601 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.601 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.859 { 00:18:38.859 "auth": { 00:18:38.859 "dhgroup": "ffdhe3072", 00:18:38.859 "digest": "sha512", 00:18:38.859 "state": "completed" 00:18:38.859 }, 00:18:38.859 "cntlid": 115, 00:18:38.859 "listen_address": { 00:18:38.859 "adrfam": "IPv4", 00:18:38.859 "traddr": "10.0.0.2", 00:18:38.859 "trsvcid": "4420", 00:18:38.859 "trtype": "TCP" 00:18:38.859 }, 00:18:38.859 "peer_address": { 00:18:38.859 "adrfam": "IPv4", 00:18:38.859 "traddr": "10.0.0.1", 00:18:38.859 "trsvcid": "36224", 00:18:38.859 "trtype": "TCP" 00:18:38.859 }, 00:18:38.859 "qid": 0, 00:18:38.859 "state": "enabled" 00:18:38.859 } 00:18:38.859 ]' 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.859 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:39.116 20:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.049 20:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.049 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.614 00:18:40.614 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
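Every --dhchap-secret and --dhchap-ctrl-secret in this log is a DH-HMAC-CHAP secret in its textual DHHC-1 form, DHHC-1:NN:<base64>:, where NN identifies the hash used to transform the secret (00 meaning no transformation, 01/02/03 meaning SHA-256/384/512) and the base64 payload carries the key material. Reading that payload as key bytes plus a trailing 4-byte CRC-32 follows the in-band authentication spec and is an interpretation, not something this run checks. A quick length check on one of the key1 secrets, copied verbatim from the trace:

# Decode the payload of a DHHC-1 string and report its size. 48 base64
# characters decode to 36 bytes, consistent with a 32-byte secret followed
# by a 4-byte CRC-32 (the split itself is an assumption, see above).
secret='DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X:'
payload=${secret#DHHC-1:??:}   # drop the "DHHC-1:NN:" prefix
payload=${payload%:}           # drop the trailing ':'
echo -n "$payload" | base64 -d | wc -c   # prints 36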
bdev_nvme_get_controllers 00:18:40.614 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.614 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.872 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.872 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.872 20:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.872 20:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.872 20:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.872 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.872 { 00:18:40.872 "auth": { 00:18:40.872 "dhgroup": "ffdhe3072", 00:18:40.872 "digest": "sha512", 00:18:40.872 "state": "completed" 00:18:40.872 }, 00:18:40.872 "cntlid": 117, 00:18:40.872 "listen_address": { 00:18:40.872 "adrfam": "IPv4", 00:18:40.872 "traddr": "10.0.0.2", 00:18:40.872 "trsvcid": "4420", 00:18:40.872 "trtype": "TCP" 00:18:40.872 }, 00:18:40.872 "peer_address": { 00:18:40.872 "adrfam": "IPv4", 00:18:40.872 "traddr": "10.0.0.1", 00:18:40.872 "trsvcid": "36256", 00:18:40.873 "trtype": "TCP" 00:18:40.873 }, 00:18:40.873 "qid": 0, 00:18:40.873 "state": "enabled" 00:18:40.873 } 00:18:40.873 ]' 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.873 20:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.130 20:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
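The same qpair dump recurs for every attach in this log; only the cntlid the target allocated and the initiator's ephemeral port in peer_address change, while listen_address stays at the target's 10.0.0.2:4420 and qid 0 marks the admin queue. When skimming a run like this, a one-liner over the same RPC output condenses each dump to the fields the test actually checks (the target's default RPC socket is assumed here, since the rpc_cmd wrapper hides it):

# One line per qpair, using the same JSON shape as the dumps above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
  | jq -r '.[] | "qid=\(.qid) cntlid=\(.cntlid) \(.auth.digest)/\(.auth.dhgroup) \(.auth.state) peer=\(.peer_address.traddr):\(.peer_address.trsvcid)"'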
== 0 ]] 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.064 20:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.322 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.580 00:18:42.580 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.580 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.580 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.839 { 00:18:42.839 "auth": { 00:18:42.839 "dhgroup": "ffdhe3072", 00:18:42.839 "digest": "sha512", 00:18:42.839 "state": "completed" 00:18:42.839 }, 00:18:42.839 "cntlid": 119, 00:18:42.839 "listen_address": { 00:18:42.839 "adrfam": "IPv4", 00:18:42.839 "traddr": "10.0.0.2", 00:18:42.839 "trsvcid": "4420", 00:18:42.839 "trtype": "TCP" 
00:18:42.839 }, 00:18:42.839 "peer_address": { 00:18:42.839 "adrfam": "IPv4", 00:18:42.839 "traddr": "10.0.0.1", 00:18:42.839 "trsvcid": "36278", 00:18:42.839 "trtype": "TCP" 00:18:42.839 }, 00:18:42.839 "qid": 0, 00:18:42.839 "state": "enabled" 00:18:42.839 } 00:18:42.839 ]' 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.839 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.098 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.098 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.098 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.098 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.098 20:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.357 20:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.924 20:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.183 20:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.441 20:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.441 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.441 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.700 00:18:44.700 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.700 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.700 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.978 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.978 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.978 20:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.978 20:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.979 20:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.979 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.979 { 00:18:44.979 "auth": { 00:18:44.979 "dhgroup": "ffdhe4096", 00:18:44.979 "digest": "sha512", 00:18:44.979 "state": "completed" 00:18:44.979 }, 00:18:44.979 "cntlid": 121, 00:18:44.979 "listen_address": { 00:18:44.979 "adrfam": "IPv4", 00:18:44.979 "traddr": "10.0.0.2", 00:18:44.979 "trsvcid": "4420", 00:18:44.979 "trtype": "TCP" 00:18:44.979 }, 00:18:44.979 "peer_address": { 00:18:44.979 "adrfam": "IPv4", 00:18:44.979 "traddr": "10.0.0.1", 00:18:44.979 "trsvcid": "36306", 00:18:44.979 "trtype": "TCP" 00:18:44.979 }, 00:18:44.979 "qid": 0, 00:18:44.979 "state": "enabled" 00:18:44.979 } 00:18:44.979 ]' 00:18:44.979 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.979 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.979 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.979 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.979 20:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.251 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.251 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.251 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.251 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.187 20:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.187 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
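The iteration logged above is one connect_authenticate round: the host-side bdev layer is restricted to a single DH-HMAC-CHAP digest/dhgroup pair, the target authorizes the host NQN with a host key (plus a controller key for bidirectional auth), and the host then attaches a controller whose qpair must finish authentication. A minimal sketch of that sequence, assuming the rpc.py path, sockets, NQNs and key names taken from the log (key1/ckey1 are key entries registered earlier in auth.sh and are only referenced here, not created):

# Hedged sketch of one connect_authenticate iteration (sha512 + ffdhe4096, key1).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # target-side RPCs (rpc_cmd in the log); default SPDK socket assumed
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }         # mirrors the hostrpc helper shown in the log
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4

# Host side: allow only one digest and one DH group for DH-HMAC-CHAP negotiation.
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: authorize the host NQN with a host key and a controller (bidirectional) key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; the resulting qpair should come up with auth.state == "completed".
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1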
00:18:46.757 00:18:46.757 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.757 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.757 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.016 { 00:18:47.016 "auth": { 00:18:47.016 "dhgroup": "ffdhe4096", 00:18:47.016 "digest": "sha512", 00:18:47.016 "state": "completed" 00:18:47.016 }, 00:18:47.016 "cntlid": 123, 00:18:47.016 "listen_address": { 00:18:47.016 "adrfam": "IPv4", 00:18:47.016 "traddr": "10.0.0.2", 00:18:47.016 "trsvcid": "4420", 00:18:47.016 "trtype": "TCP" 00:18:47.016 }, 00:18:47.016 "peer_address": { 00:18:47.016 "adrfam": "IPv4", 00:18:47.016 "traddr": "10.0.0.1", 00:18:47.016 "trsvcid": "36334", 00:18:47.016 "trtype": "TCP" 00:18:47.016 }, 00:18:47.016 "qid": 0, 00:18:47.016 "state": "enabled" 00:18:47.016 } 00:18:47.016 ]' 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.016 20:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.016 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.016 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.016 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.275 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:18:48.211 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.211 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:48.211 20:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.211 20:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:48.211 20:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.211 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.212 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.212 20:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.212 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.778 00:18:48.778 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.778 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.778 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.037 { 00:18:49.037 "auth": { 00:18:49.037 "dhgroup": "ffdhe4096", 00:18:49.037 "digest": "sha512", 00:18:49.037 "state": "completed" 00:18:49.037 }, 00:18:49.037 "cntlid": 125, 
00:18:49.037 "listen_address": { 00:18:49.037 "adrfam": "IPv4", 00:18:49.037 "traddr": "10.0.0.2", 00:18:49.037 "trsvcid": "4420", 00:18:49.037 "trtype": "TCP" 00:18:49.037 }, 00:18:49.037 "peer_address": { 00:18:49.037 "adrfam": "IPv4", 00:18:49.037 "traddr": "10.0.0.1", 00:18:49.037 "trsvcid": "34628", 00:18:49.037 "trtype": "TCP" 00:18:49.037 }, 00:18:49.037 "qid": 0, 00:18:49.037 "state": "enabled" 00:18:49.037 } 00:18:49.037 ]' 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.037 20:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.037 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.037 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.037 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.037 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.037 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.037 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.604 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.172 20:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.172 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.741 00:18:50.741 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.741 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.741 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.000 { 00:18:51.000 "auth": { 00:18:51.000 "dhgroup": "ffdhe4096", 00:18:51.000 "digest": "sha512", 00:18:51.000 "state": "completed" 00:18:51.000 }, 00:18:51.000 "cntlid": 127, 00:18:51.000 "listen_address": { 00:18:51.000 "adrfam": "IPv4", 00:18:51.000 "traddr": "10.0.0.2", 00:18:51.000 "trsvcid": "4420", 00:18:51.000 "trtype": "TCP" 00:18:51.000 }, 00:18:51.000 "peer_address": { 00:18:51.000 "adrfam": "IPv4", 00:18:51.000 "traddr": "10.0.0.1", 00:18:51.000 "trsvcid": "34642", 00:18:51.000 "trtype": "TCP" 00:18:51.000 }, 00:18:51.000 "qid": 0, 00:18:51.000 "state": "enabled" 00:18:51.000 } 00:18:51.000 ]' 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.000 20:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.000 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.000 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.000 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.259 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:18:51.827 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.827 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:51.827 20:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.827 20:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.828 20:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.828 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.828 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.828 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.828 20:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.395 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
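Every attach in this log is followed by the same verification and teardown pattern: the host controller list is checked for nvme0, the target's qpairs are queried and their negotiated auth fields compared against the expected digest and dhgroup, and then the controller is detached and the host deauthorized before the next key/dhgroup combination. A rough sketch of that check, reusing the rpc/hostrpc helpers and NQNs from the sketch above (the expected values match the sha512/ffdhe6144 round that follows here):

# Hedged sketch of the verify + teardown steps (values for the sha512/ffdhe6144 round).
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Teardown, as in the log: drop the host-side controller and deauthorize the host NQN.
hostrpc bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"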
00:18:52.654 00:18:52.654 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.654 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.654 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.915 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.915 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.915 20:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.915 20:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.915 20:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.915 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.915 { 00:18:52.915 "auth": { 00:18:52.915 "dhgroup": "ffdhe6144", 00:18:52.915 "digest": "sha512", 00:18:52.915 "state": "completed" 00:18:52.915 }, 00:18:52.915 "cntlid": 129, 00:18:52.915 "listen_address": { 00:18:52.915 "adrfam": "IPv4", 00:18:52.915 "traddr": "10.0.0.2", 00:18:52.915 "trsvcid": "4420", 00:18:52.915 "trtype": "TCP" 00:18:52.915 }, 00:18:52.915 "peer_address": { 00:18:52.915 "adrfam": "IPv4", 00:18:52.915 "traddr": "10.0.0.1", 00:18:52.915 "trsvcid": "34680", 00:18:52.915 "trtype": "TCP" 00:18:52.915 }, 00:18:52.915 "qid": 0, 00:18:52.915 "state": "enabled" 00:18:52.915 } 00:18:52.915 ]' 00:18:52.915 20:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.174 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.174 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.174 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.174 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.174 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.174 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.174 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.432 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:18:54.000 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.000 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:54.000 20:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.000 20:18:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.000 20:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.000 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.000 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.000 20:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.261 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.262 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.522 00:18:54.522 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.522 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.522 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.781 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.781 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.781 20:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.781 20:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.040 20:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.040 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.040 { 00:18:55.040 "auth": { 00:18:55.040 "dhgroup": "ffdhe6144", 00:18:55.040 "digest": "sha512", 00:18:55.040 
"state": "completed" 00:18:55.040 }, 00:18:55.040 "cntlid": 131, 00:18:55.040 "listen_address": { 00:18:55.040 "adrfam": "IPv4", 00:18:55.040 "traddr": "10.0.0.2", 00:18:55.040 "trsvcid": "4420", 00:18:55.040 "trtype": "TCP" 00:18:55.040 }, 00:18:55.040 "peer_address": { 00:18:55.040 "adrfam": "IPv4", 00:18:55.040 "traddr": "10.0.0.1", 00:18:55.040 "trsvcid": "34710", 00:18:55.040 "trtype": "TCP" 00:18:55.040 }, 00:18:55.040 "qid": 0, 00:18:55.040 "state": "enabled" 00:18:55.040 } 00:18:55.040 ]' 00:18:55.040 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.040 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.040 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.040 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.040 20:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.040 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.040 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.040 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.299 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.866 20:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.434 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.692 00:18:56.693 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.693 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.693 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.952 { 00:18:56.952 "auth": { 00:18:56.952 "dhgroup": "ffdhe6144", 00:18:56.952 "digest": "sha512", 00:18:56.952 "state": "completed" 00:18:56.952 }, 00:18:56.952 "cntlid": 133, 00:18:56.952 "listen_address": { 00:18:56.952 "adrfam": "IPv4", 00:18:56.952 "traddr": "10.0.0.2", 00:18:56.952 "trsvcid": "4420", 00:18:56.952 "trtype": "TCP" 00:18:56.952 }, 00:18:56.952 "peer_address": { 00:18:56.952 "adrfam": "IPv4", 00:18:56.952 "traddr": "10.0.0.1", 00:18:56.952 "trsvcid": "34740", 00:18:56.952 "trtype": "TCP" 00:18:56.952 }, 00:18:56.952 "qid": 0, 00:18:56.952 "state": "enabled" 00:18:56.952 } 00:18:56.952 ]' 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.952 20:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.952 20:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.952 20:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.211 20:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.211 20:18:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.211 20:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.470 20:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.037 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.295 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:18:58.861 00:18:58.861 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.861 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.861 20:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.120 { 00:18:59.120 "auth": { 00:18:59.120 "dhgroup": "ffdhe6144", 00:18:59.120 "digest": "sha512", 00:18:59.120 "state": "completed" 00:18:59.120 }, 00:18:59.120 "cntlid": 135, 00:18:59.120 "listen_address": { 00:18:59.120 "adrfam": "IPv4", 00:18:59.120 "traddr": "10.0.0.2", 00:18:59.120 "trsvcid": "4420", 00:18:59.120 "trtype": "TCP" 00:18:59.120 }, 00:18:59.120 "peer_address": { 00:18:59.120 "adrfam": "IPv4", 00:18:59.120 "traddr": "10.0.0.1", 00:18:59.120 "trsvcid": "34030", 00:18:59.120 "trtype": "TCP" 00:18:59.120 }, 00:18:59.120 "qid": 0, 00:18:59.120 "state": "enabled" 00:18:59.120 } 00:18:59.120 ]' 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.120 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.379 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.379 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.379 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.379 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.379 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.638 20:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.203 20:18:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.203 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.461 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.462 20:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.396 00:19:01.396 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.396 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.396 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.396 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.396 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.396 20:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.396 20:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.397 20:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.397 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.397 { 00:19:01.397 "auth": { 00:19:01.397 "dhgroup": "ffdhe8192", 00:19:01.397 "digest": "sha512", 
00:19:01.397 "state": "completed" 00:19:01.397 }, 00:19:01.397 "cntlid": 137, 00:19:01.397 "listen_address": { 00:19:01.397 "adrfam": "IPv4", 00:19:01.397 "traddr": "10.0.0.2", 00:19:01.397 "trsvcid": "4420", 00:19:01.397 "trtype": "TCP" 00:19:01.397 }, 00:19:01.397 "peer_address": { 00:19:01.397 "adrfam": "IPv4", 00:19:01.397 "traddr": "10.0.0.1", 00:19:01.397 "trsvcid": "34050", 00:19:01.397 "trtype": "TCP" 00:19:01.397 }, 00:19:01.397 "qid": 0, 00:19:01.397 "state": "enabled" 00:19:01.397 } 00:19:01.397 ]' 00:19:01.397 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.397 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.397 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.655 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.655 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.655 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.655 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.655 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.913 20:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.479 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.740 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.741 20:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.308 00:19:03.308 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.308 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.308 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.566 { 00:19:03.566 "auth": { 00:19:03.566 "dhgroup": "ffdhe8192", 00:19:03.566 "digest": "sha512", 00:19:03.566 "state": "completed" 00:19:03.566 }, 00:19:03.566 "cntlid": 139, 00:19:03.566 "listen_address": { 00:19:03.566 "adrfam": "IPv4", 00:19:03.566 "traddr": "10.0.0.2", 00:19:03.566 "trsvcid": "4420", 00:19:03.566 "trtype": "TCP" 00:19:03.566 }, 00:19:03.566 "peer_address": { 00:19:03.566 "adrfam": "IPv4", 00:19:03.566 "traddr": "10.0.0.1", 00:19:03.566 "trsvcid": "34086", 00:19:03.566 "trtype": "TCP" 00:19:03.566 }, 00:19:03.566 "qid": 0, 00:19:03.566 "state": "enabled" 00:19:03.566 } 00:19:03.566 ]' 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.566 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.836 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:03.836 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.836 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.131 20:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:01:YjIwMzgwODAzNWNmYTNlNmUzNDE3ZGFmMTQzOWViODGfNR8X: --dhchap-ctrl-secret DHHC-1:02:YjBlNDY1NzgzMWYwY2ViOGZhNWFlOWFkNjAwMzk2YTA4NTkzNTI5NTljZGE3NDYyV+vFWA==: 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.712 20:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.647 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.647 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.647 { 00:19:05.647 "auth": { 00:19:05.647 "dhgroup": "ffdhe8192", 00:19:05.647 "digest": "sha512", 00:19:05.647 "state": "completed" 00:19:05.647 }, 00:19:05.647 "cntlid": 141, 00:19:05.647 "listen_address": { 00:19:05.647 "adrfam": "IPv4", 00:19:05.647 "traddr": "10.0.0.2", 00:19:05.647 "trsvcid": "4420", 00:19:05.647 "trtype": "TCP" 00:19:05.647 }, 00:19:05.647 "peer_address": { 00:19:05.647 "adrfam": "IPv4", 00:19:05.647 "traddr": "10.0.0.1", 00:19:05.647 "trsvcid": "34122", 00:19:05.647 "trtype": "TCP" 00:19:05.647 }, 00:19:05.647 "qid": 0, 00:19:05.647 "state": "enabled" 00:19:05.647 } 00:19:05.647 ]' 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.906 20:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.165 20:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:02:OWMzOGI4NDQ4ZjY3NWY2OWQ5ZmI3ZmRkMjQyYTQzMTQ3MzdiZGRhZDBhNzgyNWU1YWhG9g==: --dhchap-ctrl-secret DHHC-1:01:MWExZWIzMWExOGNkY2JiZjM5ZWQ1N2QyNzE4NjJkMDOQlR/2: 00:19:06.733 20:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.733 20:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:06.733 20:18:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.733 20:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.733 20:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.733 20:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.733 20:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:06.733 20:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.991 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.926 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.926 { 00:19:07.926 "auth": { 00:19:07.926 "dhgroup": "ffdhe8192", 00:19:07.926 
"digest": "sha512", 00:19:07.926 "state": "completed" 00:19:07.926 }, 00:19:07.926 "cntlid": 143, 00:19:07.926 "listen_address": { 00:19:07.926 "adrfam": "IPv4", 00:19:07.926 "traddr": "10.0.0.2", 00:19:07.926 "trsvcid": "4420", 00:19:07.926 "trtype": "TCP" 00:19:07.926 }, 00:19:07.926 "peer_address": { 00:19:07.926 "adrfam": "IPv4", 00:19:07.926 "traddr": "10.0.0.1", 00:19:07.926 "trsvcid": "40472", 00:19:07.926 "trtype": "TCP" 00:19:07.926 }, 00:19:07.926 "qid": 0, 00:19:07.926 "state": "enabled" 00:19:07.926 } 00:19:07.926 ]' 00:19:07.926 20:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.185 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.185 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.185 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.185 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.185 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.185 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.185 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.444 20:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:19:09.011 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.269 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:09.528 20:18:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.528 20:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.095 00:19:10.095 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.095 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.095 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.354 { 00:19:10.354 "auth": { 00:19:10.354 "dhgroup": "ffdhe8192", 00:19:10.354 "digest": "sha512", 00:19:10.354 "state": "completed" 00:19:10.354 }, 00:19:10.354 "cntlid": 145, 00:19:10.354 "listen_address": { 00:19:10.354 "adrfam": "IPv4", 00:19:10.354 "traddr": "10.0.0.2", 00:19:10.354 "trsvcid": "4420", 00:19:10.354 "trtype": "TCP" 00:19:10.354 }, 00:19:10.354 "peer_address": { 00:19:10.354 "adrfam": "IPv4", 00:19:10.354 "traddr": "10.0.0.1", 00:19:10.354 "trsvcid": "40504", 00:19:10.354 "trtype": "TCP" 00:19:10.354 }, 00:19:10.354 "qid": 0, 00:19:10.354 "state": "enabled" 00:19:10.354 } 00:19:10.354 ]' 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.354 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.613 20:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:00:YzY2ZjJhNmJmZDliZTE5YTk0MDZmNWRlMjY2ZmM1NWZhODkzYzExZjUwMmY2NGVhRC7lxQ==: --dhchap-ctrl-secret DHHC-1:03:MTcyZmVlMGE2MjM3MThmYTJiMzUyYjJkYzg2OThmOWI4MTAzNTlkYjU5Njk2YTM3M2E5M2M0MWMzN2NjMTU2YnO3Ax0=: 00:19:11.180 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.180 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:11.180 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.180 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.180 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.180 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 00:19:11.180 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.181 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.748 2024/07/14 20:19:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:11.748 request: 00:19:11.748 { 00:19:11.748 "method": "bdev_nvme_attach_controller", 00:19:11.748 "params": { 00:19:11.748 "name": "nvme0", 00:19:11.748 "trtype": "tcp", 00:19:11.748 "traddr": "10.0.0.2", 00:19:11.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4", 00:19:11.748 "adrfam": "ipv4", 00:19:11.748 "trsvcid": "4420", 00:19:11.748 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:11.748 "dhchap_key": "key2" 00:19:11.748 } 00:19:11.748 } 00:19:11.748 Got JSON-RPC error response 00:19:11.748 GoRPCClient: error on JSON-RPC call 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:11.748 20:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:12.316 2024/07/14 20:19:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:12.316 request: 00:19:12.316 { 00:19:12.316 "method": "bdev_nvme_attach_controller", 00:19:12.316 "params": { 00:19:12.316 "name": "nvme0", 00:19:12.316 "trtype": "tcp", 00:19:12.316 "traddr": "10.0.0.2", 00:19:12.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4", 00:19:12.316 "adrfam": "ipv4", 00:19:12.316 "trsvcid": "4420", 00:19:12.316 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:12.316 "dhchap_key": "key1", 00:19:12.316 "dhchap_ctrlr_key": "ckey2" 00:19:12.316 } 00:19:12.316 } 00:19:12.316 Got JSON-RPC error response 00:19:12.316 GoRPCClient: error on JSON-RPC call 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key1 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.316 20:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.884 2024/07/14 20:19:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey1 dhchap_key:key1 hostnqn:nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:12.884 request: 00:19:12.884 { 00:19:12.884 "method": "bdev_nvme_attach_controller", 00:19:12.884 "params": { 00:19:12.884 "name": "nvme0", 00:19:12.884 "trtype": "tcp", 00:19:12.884 "traddr": "10.0.0.2", 00:19:12.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4", 00:19:12.884 "adrfam": "ipv4", 00:19:12.884 "trsvcid": "4420", 00:19:12.884 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:12.884 "dhchap_key": "key1", 00:19:12.884 "dhchap_ctrlr_key": "ckey1" 00:19:12.884 } 00:19:12.884 } 00:19:12.884 Got JSON-RPC error response 00:19:12.884 GoRPCClient: error on JSON-RPC call 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 93632 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 93632 ']' 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 93632 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93632 00:19:12.884 killing process with pid 93632 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93632' 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 93632 00:19:12.884 20:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 93632 00:19:13.142 20:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:13.142 20:19:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:13.142 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:13.142 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=98407 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 98407 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 98407 ']' 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
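At this point the harness has killed the first target (pid 93632) and relaunched it with --wait-for-rpc -L nvmf_auth (pid 98407), then blocks until the new process listens on /var/tmp/spdk.sock. A minimal sketch of that wait, assuming SPDK's scripts/rpc.py and the rpc_get_methods RPC (the harness's actual helper is waitforlisten, which does more error handling than this):

    # Poll the freshly restarted target's RPC socket until it answers.
    # Paths are the ones shown in this log; rpc_get_methods is a core SPDK RPC
    # that responds even while the app is parked in --wait-for-rpc mode.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            echo "target is up and listening on $SOCK"
            break
        fi
        sleep 0.1
    done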
00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:13.401 20:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 98407 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 98407 ']' 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.343 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.602 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:14.602 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:14.602 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:14.602 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.602 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
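The chunk that follows re-runs connect_authenticate for sha512/ffdhe8192 with key3 against the restarted target: the subsystem authorizes the host NQN with a DH-HMAC-CHAP key, the host-side bdev_nvme app attaches with the matching key, and the resulting admin qpair is checked for negotiated digest, dhgroup and auth state. A condensed sketch of that round trip, using only RPCs and flags that appear in this log; TGTRPC stands in for the harness's rpc_cmd helper (the target's socket/netns wiring is not shown in this chunk), and key3 is one of the keys the test registered earlier:

    SPDK=/home/vagrant/spdk_repo/spdk
    HOSTRPC="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"   # host-side bdev_nvme app
    TGTRPC="$SPDK/scripts/rpc.py"                          # stands in for rpc_cmd (target)
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Target side: allow the host NQN on the subsystem with DH-HMAC-CHAP key3.
    $TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side: restrict negotiation to sha512/ffdhe8192, then attach with the same key.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3

    # Verify: the qpair's auth object should report sha512 / ffdhe8192 / "completed",
    # which is exactly what the jq checks in the log assert.
    $TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'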
00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.861 20:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.429 00:19:15.429 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.429 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.429 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.687 { 00:19:15.687 "auth": { 00:19:15.687 "dhgroup": "ffdhe8192", 00:19:15.687 "digest": "sha512", 00:19:15.687 "state": "completed" 00:19:15.687 }, 00:19:15.687 "cntlid": 1, 00:19:15.687 "listen_address": { 00:19:15.687 "adrfam": "IPv4", 00:19:15.687 "traddr": "10.0.0.2", 00:19:15.687 "trsvcid": "4420", 00:19:15.687 "trtype": "TCP" 00:19:15.687 }, 00:19:15.687 "peer_address": { 00:19:15.687 "adrfam": "IPv4", 00:19:15.687 "traddr": "10.0.0.1", 00:19:15.687 "trsvcid": "40548", 00:19:15.687 "trtype": "TCP" 00:19:15.687 }, 00:19:15.687 "qid": 0, 00:19:15.687 "state": "enabled" 00:19:15.687 } 00:19:15.687 ]' 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.687 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.945 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.945 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.945 20:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.203 20:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-secret DHHC-1:03:MGY2MzI3MDIzM2RkNzVlN2I5YWQ3NWNhZjFhOWE1NjYzYmNjMDY1OTJhNjUxNDJiZWIyNjkxNmRmNTMxZDZiNSX1tGc=: 00:19:16.769 20:19:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --dhchap-key key3 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:16.769 20:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.027 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.284 2024/07/14 20:19:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key3 hostnqn:nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:17.285 request: 00:19:17.285 { 00:19:17.285 "method": "bdev_nvme_attach_controller", 00:19:17.285 "params": { 00:19:17.285 "name": "nvme0", 00:19:17.285 "trtype": "tcp", 00:19:17.285 "traddr": "10.0.0.2", 00:19:17.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4", 00:19:17.285 "adrfam": "ipv4", 00:19:17.285 "trsvcid": "4420", 00:19:17.285 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.285 "dhchap_key": "key3" 00:19:17.285 } 00:19:17.285 } 00:19:17.285 Got JSON-RPC error response 00:19:17.285 GoRPCClient: error on JSON-RPC call 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:17.285 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.543 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.802 2024/07/14 20:19:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key3 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:17.802 request: 00:19:17.802 { 00:19:17.802 "method": "bdev_nvme_attach_controller", 00:19:17.802 "params": { 00:19:17.802 "name": "nvme0", 00:19:17.802 "trtype": "tcp", 00:19:17.802 "traddr": "10.0.0.2", 00:19:17.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4", 00:19:17.802 "adrfam": "ipv4", 00:19:17.802 "trsvcid": "4420", 00:19:17.802 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.802 "dhchap_key": "key3" 00:19:17.802 } 00:19:17.802 } 00:19:17.802 Got JSON-RPC error response 00:19:17.802 GoRPCClient: error on JSON-RPC call 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:17.802 20:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:18.061 20:19:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.061 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:18.319 2024/07/14 20:19:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:key1 dhchap_key:key0 hostnqn:nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:18.319 request: 00:19:18.319 { 00:19:18.319 "method": "bdev_nvme_attach_controller", 00:19:18.319 "params": { 00:19:18.319 "name": "nvme0", 00:19:18.319 "trtype": "tcp", 00:19:18.319 "traddr": "10.0.0.2", 00:19:18.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4", 00:19:18.319 "adrfam": "ipv4", 00:19:18.319 "trsvcid": "4420", 00:19:18.319 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.319 "dhchap_key": "key0", 00:19:18.319 "dhchap_ctrlr_key": "key1" 00:19:18.319 } 00:19:18.319 } 00:19:18.319 Got JSON-RPC error response 00:19:18.319 GoRPCClient: error on JSON-RPC call 00:19:18.319 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:18.319 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:18.319 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:18.319 20:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:18.319 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.319 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:18.577 00:19:18.577 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc 
bdev_nvme_get_controllers 00:19:18.577 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:18.577 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.834 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.835 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.835 20:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 93676 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 93676 ']' 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 93676 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93676 00:19:19.093 killing process with pid 93676 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93676' 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 93676 00:19:19.093 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 93676 00:19:19.667 20:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:19.667 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.667 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:19.667 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.667 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:19.667 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.667 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.667 rmmod nvme_tcp 00:19:19.667 rmmod nvme_fabrics 00:19:19.667 rmmod nvme_keyring 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 98407 ']' 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 98407 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 98407 ']' 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 98407 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:19.925 20:19:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98407 00:19:19.925 killing process with pid 98407 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98407' 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 98407 00:19:19.925 20:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 98407 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.u44 /tmp/spdk.key-sha256.S8U /tmp/spdk.key-sha384.6HL /tmp/spdk.key-sha512.9um /tmp/spdk.key-sha512.zlK /tmp/spdk.key-sha384.bE2 /tmp/spdk.key-sha256.6Nm '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:19:20.182 00:19:20.182 real 2m43.736s 00:19:20.182 user 6m36.733s 00:19:20.182 sys 0m22.611s 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:20.182 ************************************ 00:19:20.182 END TEST nvmf_auth_target 00:19:20.182 ************************************ 00:19:20.182 20:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.182 20:19:09 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:20.182 20:19:09 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:20.182 20:19:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:20.182 20:19:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:20.182 20:19:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.182 ************************************ 00:19:20.182 START TEST nvmf_bdevio_no_huge 00:19:20.182 ************************************ 00:19:20.182 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:20.182 * Looking for test storage... 
00:19:20.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:20.182 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.440 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.441 20:19:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:20.441 Cannot find device "nvmf_tgt_br" 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.441 Cannot find device "nvmf_tgt_br2" 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:20.441 Cannot find device "nvmf_tgt_br" 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:20.441 Cannot find device "nvmf_tgt_br2" 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:20.441 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:20.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:20.699 00:19:20.699 --- 10.0.0.2 ping statistics --- 00:19:20.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.699 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:20.699 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:20.699 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:19:20.699 00:19:20.699 --- 10.0.0.3 ping statistics --- 00:19:20.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.699 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:20.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:20.699 00:19:20.699 --- 10.0.0.1 ping statistics --- 00:19:20.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.699 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=98827 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 98827 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 98827 ']' 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
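Stripped of the xtrace noise, the nvmf_veth_init sequence above builds a small, reproducible topology: one network namespace for the target, veth pairs whose host-side ends hang off a single bridge, and an iptables rule opening TCP/4420 on the initiator interface. A condensed sketch of the same commands, with the interface names and addresses taken from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is set up the same way and omitted here; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # tie both host-side ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # same connectivity check as above

With that in place the target itself is launched inside the namespace with hugepages disabled, exactly as the trace shows: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78.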
00:19:20.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:20.699 20:19:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.699 [2024-07-14 20:19:09.716366] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:20.699 [2024-07-14 20:19:09.716472] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:20.957 [2024-07-14 20:19:09.867545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.957 [2024-07-14 20:19:09.990318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.957 [2024-07-14 20:19:09.990393] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.957 [2024-07-14 20:19:09.990418] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.957 [2024-07-14 20:19:09.990429] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.957 [2024-07-14 20:19:09.990438] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.957 [2024-07-14 20:19:09.991005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.957 [2024-07-14 20:19:09.991273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:20.957 [2024-07-14 20:19:09.991398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:20.957 [2024-07-14 20:19:09.991407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.900 [2024-07-14 20:19:10.771833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.900 Malloc0 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.900 20:19:10 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.900 [2024-07-14 20:19:10.811968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:21.900 { 00:19:21.900 "params": { 00:19:21.900 "name": "Nvme$subsystem", 00:19:21.900 "trtype": "$TEST_TRANSPORT", 00:19:21.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.900 "adrfam": "ipv4", 00:19:21.900 "trsvcid": "$NVMF_PORT", 00:19:21.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.900 "hdgst": ${hdgst:-false}, 00:19:21.900 "ddgst": ${ddgst:-false} 00:19:21.900 }, 00:19:21.900 "method": "bdev_nvme_attach_controller" 00:19:21.900 } 00:19:21.900 EOF 00:19:21.900 )") 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
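Stripped down, the target-side provisioning for this bdevio run is five RPCs against the default /var/tmp/spdk.sock application socket, all visible in the trace above. A minimal sketch using the same script and arguments (the transport flags are copied as-is from the run; the malloc bdev is 64 MiB with 512-byte blocks, -a allows any host and -s sets the subsystem serial number):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON block that gen_nvmf_target_json emits next is the host-side mirror of this: a single bdev_nvme_attach_controller entry pointing the bdevio app at 10.0.0.2:4420 and subsystem cnode1.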
00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:21.900 20:19:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:21.900 "params": { 00:19:21.900 "name": "Nvme1", 00:19:21.900 "trtype": "tcp", 00:19:21.900 "traddr": "10.0.0.2", 00:19:21.900 "adrfam": "ipv4", 00:19:21.900 "trsvcid": "4420", 00:19:21.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.900 "hdgst": false, 00:19:21.900 "ddgst": false 00:19:21.900 }, 00:19:21.900 "method": "bdev_nvme_attach_controller" 00:19:21.900 }' 00:19:21.900 [2024-07-14 20:19:10.872973] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:21.900 [2024-07-14 20:19:10.873098] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid98881 ] 00:19:22.199 [2024-07-14 20:19:11.017680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.199 [2024-07-14 20:19:11.166402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.199 [2024-07-14 20:19:11.166548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.199 [2024-07-14 20:19:11.166565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.456 I/O targets: 00:19:22.456 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.456 00:19:22.456 00:19:22.456 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.456 http://cunit.sourceforge.net/ 00:19:22.456 00:19:22.456 00:19:22.456 Suite: bdevio tests on: Nvme1n1 00:19:22.456 Test: blockdev write read block ...passed 00:19:22.456 Test: blockdev write zeroes read block ...passed 00:19:22.456 Test: blockdev write zeroes read no split ...passed 00:19:22.456 Test: blockdev write zeroes read split ...passed 00:19:22.456 Test: blockdev write zeroes read split partial ...passed 00:19:22.456 Test: blockdev reset ...[2024-07-14 20:19:11.490599] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.456 [2024-07-14 20:19:11.490731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aea90 (9): Bad file descriptor 00:19:22.456 [2024-07-14 20:19:11.503897] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.456 passed 00:19:22.456 Test: blockdev write read 8 blocks ...passed 00:19:22.456 Test: blockdev write read size > 128k ...passed 00:19:22.456 Test: blockdev write read invalid size ...passed 00:19:22.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.713 Test: blockdev write read max offset ...passed 00:19:22.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.713 Test: blockdev writev readv 8 blocks ...passed 00:19:22.713 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.713 Test: blockdev writev readv block ...passed 00:19:22.713 Test: blockdev writev readv size > 128k ...passed 00:19:22.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.713 Test: blockdev comparev and writev ...[2024-07-14 20:19:11.677799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.677903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.677925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.677936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.678657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.678701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.678720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.678730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.679189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.679217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.679235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.679246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.679780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.679808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.679840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.713 [2024-07-14 20:19:11.679850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.713 passed 00:19:22.713 Test: blockdev nvme passthru rw ...passed 00:19:22.713 Test: blockdev nvme passthru vendor specific ...[2024-07-14 20:19:11.763337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.713 [2024-07-14 20:19:11.763413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.763838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.713 [2024-07-14 20:19:11.763876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.764146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.713 [2024-07-14 20:19:11.764174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.713 [2024-07-14 20:19:11.764354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.713 [2024-07-14 20:19:11.764380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.713 passed 00:19:22.713 Test: blockdev nvme admin passthru ...passed 00:19:22.971 Test: blockdev copy ...passed 00:19:22.971 00:19:22.971 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.971 suites 1 1 n/a 0 0 00:19:22.971 tests 23 23 23 0 0 00:19:22.971 asserts 152 152 152 0 n/a 00:19:22.971 00:19:22.971 Elapsed time = 0.923 seconds 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.229 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.229 rmmod nvme_tcp 00:19:23.229 rmmod nvme_fabrics 00:19:23.229 rmmod nvme_keyring 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 98827 ']' 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 98827 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 98827 ']' 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 98827 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98827 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:23.487 killing process with pid 98827 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98827' 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 98827 00:19:23.487 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 98827 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:23.745 00:19:23.745 real 0m3.612s 00:19:23.745 user 0m12.963s 00:19:23.745 sys 0m1.434s 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:23.745 20:19:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.745 ************************************ 00:19:23.745 END TEST nvmf_bdevio_no_huge 00:19:23.745 ************************************ 00:19:24.004 20:19:12 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.004 20:19:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:24.004 20:19:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:24.004 20:19:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:24.004 ************************************ 00:19:24.004 START TEST nvmf_tls 00:19:24.004 ************************************ 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.004 * Looking for test storage... 
00:19:24.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:24.004 Cannot find device "nvmf_tgt_br" 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.004 Cannot find device "nvmf_tgt_br2" 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:24.004 20:19:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:24.004 Cannot find device "nvmf_tgt_br" 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:24.004 Cannot find device "nvmf_tgt_br2" 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:24.004 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:24.262 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:24.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:19:24.263 00:19:24.263 --- 10.0.0.2 ping statistics --- 00:19:24.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.263 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:24.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:24.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:19:24.263 00:19:24.263 --- 10.0.0.3 ping statistics --- 00:19:24.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.263 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:24.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:24.263 00:19:24.263 --- 10.0.0.1 ping statistics --- 00:19:24.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.263 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99059 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99059 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99059 ']' 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:24.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:24.263 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.263 [2024-07-14 20:19:13.342688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:24.263 [2024-07-14 20:19:13.342800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.521 [2024-07-14 20:19:13.484350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.521 [2024-07-14 20:19:13.595580] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.521 [2024-07-14 20:19:13.595658] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
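The -m 0x2 --wait-for-rpc pair on this nvmf_tgt invocation is deliberate: the ssl socket-implementation options have to land before the framework finishes initializing, so the app is started parked and only released once they are in place. Condensed, the ordering that the commands below follow (paths and flags as in the trace) is:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl               # make the ssl implementation the default
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init                       # only now does target initialization complete

Everything in between, including the tls-version and ktls probing that follows, is therefore querying and tweaking a target that has not yet brought up its subsystems.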
00:19:24.521 [2024-07-14 20:19:13.595686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.521 [2024-07-14 20:19:13.595693] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.521 [2024-07-14 20:19:13.595700] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.521 [2024-07-14 20:19:13.595732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:25.454 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:25.712 true 00:19:25.712 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.712 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:25.970 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:25.970 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:25.970 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:26.228 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.228 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:26.486 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:26.486 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:26.486 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:26.744 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.744 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:27.002 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:27.002 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:27.002 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:27.002 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:27.260 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:27.260 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:27.260 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:27.518 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:27.518 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
00:19:27.777 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:27.777 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:27.777 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:28.035 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.035 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.XQbYjhCcfZ 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.auH3gybfmH 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.XQbYjhCcfZ 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.auH3gybfmH 00:19:28.293 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:28.551 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:28.809 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.XQbYjhCcfZ 
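The format_interchange_psk / format_key trace above is what produces the NVMeTLSkey-1:01:...: and NVMeTLSkey-1:02:...: strings used for the rest of the run. Below is a minimal sketch of what that helper appears to compute, assuming the TLS PSK interchange layout is a prefix, a two-digit hash identifier (01 for HMAC SHA-256, 02 for HMAC SHA-384), and base64 of the configured PSK with a CRC-32 appended; the CRC byte order here is an assumption, and the function name is my own, so treat format_key() in nvmf/common.sh as authoritative. The python3 heredoc mirrors the "python -" step visible in the trace.

# Sketch only: assumes interchange format = NVMeTLSkey-1:<hash id>:base64(psk || crc32(psk)):
format_interchange_psk_sketch() {
	local key=$1 digest=$2   # digest: 1 -> "01" (HMAC SHA-256), 2 -> "02" (HMAC SHA-384)
	python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

psk = sys.argv[1].encode()                   # configured PSK, taken as raw ASCII bytes
digest = int(sys.argv[2])
crc = zlib.crc32(psk).to_bytes(4, "little")  # assumption: zlib CRC-32, appended little-endian
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(psk + crc).decode()}:")
PYEOF
}

# e.g. format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1

Both generated keys are then written to mktemp files and locked down with chmod 0600 before being handed to the target, as the trace shows.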
00:19:28.809 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XQbYjhCcfZ 00:19:28.809 20:19:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.067 [2024-07-14 20:19:18.005528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.067 20:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.325 20:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.583 [2024-07-14 20:19:18.433621] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.583 [2024-07-14 20:19:18.433877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.583 20:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.840 malloc0 00:19:29.840 20:19:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.097 20:19:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XQbYjhCcfZ 00:19:30.354 [2024-07-14 20:19:19.228457] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:30.354 20:19:19 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XQbYjhCcfZ 00:19:42.545 Initializing NVMe Controllers 00:19:42.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:42.545 Initialization complete. Launching workers. 00:19:42.545 ======================================================== 00:19:42.545 Latency(us) 00:19:42.545 Device Information : IOPS MiB/s Average min max 00:19:42.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11030.28 43.09 5803.27 1521.97 7241.95 00:19:42.545 ======================================================== 00:19:42.545 Total : 11030.28 43.09 5803.27 1521.97 7241.95 00:19:42.545 00:19:42.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
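Stripped of the xtrace noise, the target-side setup that setup_nvmf_tgt walks through above reduces to the RPC sequence below. Every flag and argument is taken from this run's trace; the rpc/key shell variables are just shorthand added here for readability. The target itself was started with --wait-for-rpc inside the nvmf_tgt_ns_spdk namespace, which is why framework_start_init has to be issued explicitly.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.XQbYjhCcfZ

$rpc sock_set_default_impl -i ssl                    # use the TLS-capable ssl socket implementation
$rpc sock_impl_set_options -i ssl --tls-version 13   # pin TLS 1.3 (the earlier get/set round-trips check this sticks)
$rpc framework_start_init                            # finish startup of the --wait-for-rpc target

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-secured listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The spdk_nvme_perf run above then connects from inside the namespace with -S ssl and --psk-path pointing at the same key file.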
00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XQbYjhCcfZ 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XQbYjhCcfZ' 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99418 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99418 /var/tmp/bdevperf.sock 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99418 ']' 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.545 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.545 [2024-07-14 20:19:29.489051] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:42.545 [2024-07-14 20:19:29.489162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99418 ] 00:19:42.545 [2024-07-14 20:19:29.627518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.545 [2024-07-14 20:19:29.750286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.545 20:19:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:42.545 20:19:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:42.545 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XQbYjhCcfZ 00:19:42.545 [2024-07-14 20:19:30.690562] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.545 [2024-07-14 20:19:30.690974] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.545 TLSTESTn1 00:19:42.545 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:42.545 Running I/O for 10 seconds... 
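The client half of run_bdevperf, traced just above, is the pattern every later positive and negative case in this file reuses: start bdevperf idle on a private RPC socket, attach the NVMe/TCP controller with (or without) a PSK, then trigger the workload. Roughly, using the same binaries, sockets and NQNs as this run (the spdk/sock/key variables are shorthand added here):

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock
key=/tmp/tmp.XQbYjhCcfZ

# Start bdevperf idle (-z); it only listens on its private RPC socket until told to run.
$spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

# Attach the controller over TLS; --psk points at the interchange-format key file.
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"

# Kick off the configured verify workload against the attached bdev(s).
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

In the failure cases that follow (wrong key, unregistered host NQN, missing key), it is the bdev_nvme_attach_controller step that errors out, which is why those tests never reach perform_tests.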
00:19:52.512 00:19:52.512 Latency(us) 00:19:52.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.512 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:52.512 Verification LBA range: start 0x0 length 0x2000 00:19:52.512 TLSTESTn1 : 10.03 4454.34 17.40 0.00 0.00 28681.68 9889.98 20971.52 00:19:52.512 =================================================================================================================== 00:19:52.512 Total : 4454.34 17.40 0.00 0.00 28681.68 9889.98 20971.52 00:19:52.512 0 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99418 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99418 ']' 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99418 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99418 00:19:52.512 killing process with pid 99418 00:19:52.512 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.512 00:19:52.512 Latency(us) 00:19:52.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.512 =================================================================================================================== 00:19:52.512 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99418' 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99418 00:19:52.512 [2024-07-14 20:19:40.950172] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:52.512 20:19:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99418 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.auH3gybfmH 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.auH3gybfmH 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.auH3gybfmH 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 
00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.auH3gybfmH' 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99564 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99564 /var/tmp/bdevperf.sock 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99564 ']' 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:52.512 20:19:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.512 [2024-07-14 20:19:41.309381] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:52.512 [2024-07-14 20:19:41.309757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99564 ] 00:19:52.512 [2024-07-14 20:19:41.446691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.512 [2024-07-14 20:19:41.544300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.448 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:53.448 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:53.448 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.auH3gybfmH 00:19:53.449 [2024-07-14 20:19:42.497694] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.449 [2024-07-14 20:19:42.498048] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:53.449 [2024-07-14 20:19:42.505414] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:53.449 [2024-07-14 20:19:42.505763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197c8b0 (107): Transport endpoint is not connected 00:19:53.449 [2024-07-14 20:19:42.506753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197c8b0 (9): Bad file descriptor 00:19:53.449 [2024-07-14 
20:19:42.507750] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.449 [2024-07-14 20:19:42.507773] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:53.449 [2024-07-14 20:19:42.507786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.449 2024/07/14 20:19:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.auH3gybfmH subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:53.449 request: 00:19:53.449 { 00:19:53.449 "method": "bdev_nvme_attach_controller", 00:19:53.449 "params": { 00:19:53.449 "name": "TLSTEST", 00:19:53.449 "trtype": "tcp", 00:19:53.449 "traddr": "10.0.0.2", 00:19:53.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.449 "adrfam": "ipv4", 00:19:53.449 "trsvcid": "4420", 00:19:53.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.449 "psk": "/tmp/tmp.auH3gybfmH" 00:19:53.449 } 00:19:53.449 } 00:19:53.449 Got JSON-RPC error response 00:19:53.449 GoRPCClient: error on JSON-RPC call 00:19:53.449 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99564 00:19:53.449 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99564 ']' 00:19:53.449 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99564 00:19:53.449 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:53.706 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:53.706 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99564 00:19:53.706 killing process with pid 99564 00:19:53.706 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.706 00:19:53.706 Latency(us) 00:19:53.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.706 =================================================================================================================== 00:19:53.706 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.706 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:53.706 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:53.706 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99564' 00:19:53.706 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99564 00:19:53.706 [2024-07-14 20:19:42.558254] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:53.706 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99564 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.XQbYjhCcfZ 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XQbYjhCcfZ 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XQbYjhCcfZ 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XQbYjhCcfZ' 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99610 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99610 /var/tmp/bdevperf.sock 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99610 ']' 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:53.964 20:19:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.964 [2024-07-14 20:19:42.894581] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:53.964 [2024-07-14 20:19:42.895527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99610 ] 00:19:53.964 [2024-07-14 20:19:43.028177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.221 [2024-07-14 20:19:43.111164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.786 20:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:54.786 20:19:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:54.786 20:19:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.XQbYjhCcfZ 00:19:55.043 [2024-07-14 20:19:44.028556] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.043 [2024-07-14 20:19:44.028657] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:55.043 [2024-07-14 20:19:44.035445] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:55.043 [2024-07-14 20:19:44.035476] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:55.043 [2024-07-14 20:19:44.035522] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:55.043 [2024-07-14 20:19:44.036367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189f8b0 (107): Transport endpoint is not connected 00:19:55.043 [2024-07-14 20:19:44.037365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189f8b0 (9): Bad file descriptor 00:19:55.043 [2024-07-14 20:19:44.038362] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.043 [2024-07-14 20:19:44.038379] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:55.043 [2024-07-14 20:19:44.038392] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:55.043 2024/07/14 20:19:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.XQbYjhCcfZ subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:55.043 request: 00:19:55.043 { 00:19:55.043 "method": "bdev_nvme_attach_controller", 00:19:55.043 "params": { 00:19:55.043 "name": "TLSTEST", 00:19:55.043 "trtype": "tcp", 00:19:55.043 "traddr": "10.0.0.2", 00:19:55.043 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:55.043 "adrfam": "ipv4", 00:19:55.043 "trsvcid": "4420", 00:19:55.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.043 "psk": "/tmp/tmp.XQbYjhCcfZ" 00:19:55.043 } 00:19:55.043 } 00:19:55.043 Got JSON-RPC error response 00:19:55.043 GoRPCClient: error on JSON-RPC call 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99610 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99610 ']' 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99610 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99610 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:55.043 killing process with pid 99610 00:19:55.043 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.043 00:19:55.043 Latency(us) 00:19:55.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.043 =================================================================================================================== 00:19:55.043 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99610' 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99610 00:19:55.043 [2024-07-14 20:19:44.085236] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.043 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99610 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XQbYjhCcfZ 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XQbYjhCcfZ 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XQbYjhCcfZ 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XQbYjhCcfZ' 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99650 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99650 /var/tmp/bdevperf.sock 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99650 ']' 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:55.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:55.301 20:19:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.559 [2024-07-14 20:19:44.423264] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:55.559 [2024-07-14 20:19:44.423358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99650 ] 00:19:55.559 [2024-07-14 20:19:44.564008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.817 [2024-07-14 20:19:44.663410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.381 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:56.381 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:56.381 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XQbYjhCcfZ 00:19:56.640 [2024-07-14 20:19:45.559028] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.640 [2024-07-14 20:19:45.559137] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.640 [2024-07-14 20:19:45.565475] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:56.640 [2024-07-14 20:19:45.565507] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:56.640 [2024-07-14 20:19:45.565555] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.640 [2024-07-14 20:19:45.565745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf8b0 (107): Transport endpoint is not connected 00:19:56.640 [2024-07-14 20:19:45.566735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bf8b0 (9): Bad file descriptor 00:19:56.640 [2024-07-14 20:19:45.567732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:56.640 [2024-07-14 20:19:45.567752] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.640 [2024-07-14 20:19:45.567765] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:56.640 2024/07/14 20:19:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.XQbYjhCcfZ subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:56.640 request: 00:19:56.640 { 00:19:56.640 "method": "bdev_nvme_attach_controller", 00:19:56.640 "params": { 00:19:56.640 "name": "TLSTEST", 00:19:56.640 "trtype": "tcp", 00:19:56.640 "traddr": "10.0.0.2", 00:19:56.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.640 "adrfam": "ipv4", 00:19:56.640 "trsvcid": "4420", 00:19:56.640 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.640 "psk": "/tmp/tmp.XQbYjhCcfZ" 00:19:56.640 } 00:19:56.640 } 00:19:56.640 Got JSON-RPC error response 00:19:56.640 GoRPCClient: error on JSON-RPC call 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99650 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99650 ']' 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99650 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99650 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:56.640 killing process with pid 99650 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99650' 00:19:56.640 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.640 00:19:56.640 Latency(us) 00:19:56.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.640 =================================================================================================================== 00:19:56.640 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99650 00:19:56.640 [2024-07-14 20:19:45.614118] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:56.640 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99650 00:19:56.898 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:56.898 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:56.899 20:19:45 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99700 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99700 /var/tmp/bdevperf.sock 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99700 ']' 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:56.899 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.899 [2024-07-14 20:19:45.953770] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:56.899 [2024-07-14 20:19:45.953887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99700 ] 00:19:57.157 [2024-07-14 20:19:46.093280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.158 [2024-07-14 20:19:46.164958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.093 20:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:58.093 20:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:58.093 20:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.093 [2024-07-14 20:19:47.105814] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.093 [2024-07-14 20:19:47.107108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656620 (9): Bad file descriptor 00:19:58.093 [2024-07-14 20:19:47.108101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.093 [2024-07-14 20:19:47.108116] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.093 [2024-07-14 20:19:47.108130] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:58.093 2024/07/14 20:19:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:58.093 request: 00:19:58.093 { 00:19:58.093 "method": "bdev_nvme_attach_controller", 00:19:58.093 "params": { 00:19:58.093 "name": "TLSTEST", 00:19:58.093 "trtype": "tcp", 00:19:58.093 "traddr": "10.0.0.2", 00:19:58.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.093 "adrfam": "ipv4", 00:19:58.093 "trsvcid": "4420", 00:19:58.093 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:58.093 } 00:19:58.093 } 00:19:58.093 Got JSON-RPC error response 00:19:58.093 GoRPCClient: error on JSON-RPC call 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99700 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99700 ']' 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99700 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99700 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99700' 00:19:58.093 killing process with pid 99700 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@965 -- # kill 99700 00:19:58.093 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.093 00:19:58.093 Latency(us) 00:19:58.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.093 =================================================================================================================== 00:19:58.093 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.093 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99700 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 99059 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99059 ']' 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99059 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99059 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:58.662 killing process with pid 99059 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99059' 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99059 00:19:58.662 [2024-07-14 20:19:47.471424] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:58.662 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99059 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.GktrxbLTTF 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 
0600 /tmp/tmp.GktrxbLTTF 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99757 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99757 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99757 ']' 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:58.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.921 20:19:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.921 [2024-07-14 20:19:47.909676] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:58.921 [2024-07-14 20:19:47.909758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.180 [2024-07-14 20:19:48.043583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.180 [2024-07-14 20:19:48.149999] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.180 [2024-07-14 20:19:48.150071] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.180 [2024-07-14 20:19:48.150082] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.180 [2024-07-14 20:19:48.150098] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.180 [2024-07-14 20:19:48.150105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.180 [2024-07-14 20:19:48.150135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.GktrxbLTTF 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GktrxbLTTF 00:20:00.119 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.119 [2024-07-14 20:19:49.166584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.119 20:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.377 20:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.636 [2024-07-14 20:19:49.638677] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.636 [2024-07-14 20:19:49.639040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.636 20:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.895 malloc0 00:20:00.895 20:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.153 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GktrxbLTTF 00:20:01.412 [2024-07-14 20:19:50.341801] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GktrxbLTTF 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GktrxbLTTF' 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99854 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.412 20:19:50 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99854 /var/tmp/bdevperf.sock 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99854 ']' 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:01.412 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.412 [2024-07-14 20:19:50.418389] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:01.412 [2024-07-14 20:19:50.418531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99854 ] 00:20:01.671 [2024-07-14 20:19:50.558539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.671 [2024-07-14 20:19:50.650399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.239 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:02.239 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:02.239 20:19:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GktrxbLTTF 00:20:02.497 [2024-07-14 20:19:51.522801] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.497 [2024-07-14 20:19:51.522995] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:02.756 TLSTESTn1 00:20:02.756 20:19:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.756 Running I/O for 10 seconds... 
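The setup_nvmf_tgt and run_bdevperf steps traced above are easier to read as the plain RPC sequence they expand to; this sketch simply collects the commands from the log (the NQNs, addresses and the /tmp key path are the test's own values, and a waitforlisten-style poll on the bdevperf socket belongs between launching bdevperf and attaching the controller).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.GktrxbLTTF

  # target side: TCP transport, subsystem, TLS listener (-k), malloc namespace,
  # and the host that is allowed to connect with this PSK
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

  # initiator side: bdevperf in wait-for-RPC mode (-z) on its own socket,
  # a TLS bdev_nvme controller using the same PSK, then the actual I/O run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # (wait for /var/tmp/bdevperf.sock here, as in the earlier sketch)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$key"
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests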
00:20:12.730 00:20:12.730 Latency(us) 00:20:12.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.730 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.730 Verification LBA range: start 0x0 length 0x2000 00:20:12.730 TLSTESTn1 : 10.03 4491.63 17.55 0.00 0.00 28439.94 9175.04 20256.58 00:20:12.730 =================================================================================================================== 00:20:12.730 Total : 4491.63 17.55 0.00 0.00 28439.94 9175.04 20256.58 00:20:12.730 0 00:20:12.730 20:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.730 20:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99854 00:20:12.730 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99854 ']' 00:20:12.730 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99854 00:20:12.730 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:12.730 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.731 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99854 00:20:12.731 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:12.731 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:12.731 killing process with pid 99854 00:20:12.731 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99854' 00:20:12.731 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.731 00:20:12.731 Latency(us) 00:20:12.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.731 =================================================================================================================== 00:20:12.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.731 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99854 00:20:12.731 [2024-07-14 20:20:01.784049] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:12.731 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99854 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.GktrxbLTTF 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GktrxbLTTF 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GktrxbLTTF 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GktrxbLTTF 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.989 
20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GktrxbLTTF' 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100007 00:20:12.989 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100007 /var/tmp/bdevperf.sock 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100007 ']' 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:13.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:13.248 20:20:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.248 [2024-07-14 20:20:02.119962] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:13.248 [2024-07-14 20:20:02.120067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100007 ] 00:20:13.248 [2024-07-14 20:20:02.254605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.506 [2024-07-14 20:20:02.339901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.072 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.072 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:14.072 20:20:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GktrxbLTTF 00:20:14.330 [2024-07-14 20:20:03.318397] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.330 [2024-07-14 20:20:03.318464] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:14.330 [2024-07-14 20:20:03.318475] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.GktrxbLTTF 00:20:14.330 2024/07/14 20:20:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.GktrxbLTTF subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:20:14.330 request: 00:20:14.330 { 00:20:14.330 
"method": "bdev_nvme_attach_controller", 00:20:14.330 "params": { 00:20:14.330 "name": "TLSTEST", 00:20:14.330 "trtype": "tcp", 00:20:14.330 "traddr": "10.0.0.2", 00:20:14.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.330 "adrfam": "ipv4", 00:20:14.330 "trsvcid": "4420", 00:20:14.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.330 "psk": "/tmp/tmp.GktrxbLTTF" 00:20:14.330 } 00:20:14.330 } 00:20:14.330 Got JSON-RPC error response 00:20:14.330 GoRPCClient: error on JSON-RPC call 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100007 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100007 ']' 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100007 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100007 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:14.330 killing process with pid 100007 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100007' 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100007 00:20:14.330 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.330 00:20:14.330 Latency(us) 00:20:14.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.330 =================================================================================================================== 00:20:14.330 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.330 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100007 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 99757 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99757 ']' 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99757 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99757 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:14.588 killing process with pid 99757 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99757' 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99757 00:20:14.588 [2024-07-14 20:20:03.669382] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:14.588 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99757 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100062 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100062 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100062 ']' 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:15.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:15.156 20:20:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.156 [2024-07-14 20:20:04.039648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:15.156 [2024-07-14 20:20:04.039729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.156 [2024-07-14 20:20:04.179218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.415 [2024-07-14 20:20:04.253277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.415 [2024-07-14 20:20:04.253329] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.415 [2024-07-14 20:20:04.253339] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.415 [2024-07-14 20:20:04.253346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.415 [2024-07-14 20:20:04.253353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
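The failure above is the point of the chmod 0666 step: the initiator refuses to load a PSK file with loose permissions. A hand-run version of the same negative check looks like the sketch below; the NOT wrapper in the script expects exactly this RPC failure (Code=-1 Msg=Operation not permitted, with "Incorrect permissions for PSK file" in the bdev_nvme log).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.GktrxbLTTF
  chmod 0666 "$key"
  if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
          --psk "$key"; then
      echo "unexpected success: world-readable PSK was accepted" >&2
      exit 1
  fi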
00:20:15.415 [2024-07-14 20:20:04.253378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.982 20:20:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.982 20:20:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:15.982 20:20:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.982 20:20:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.982 20:20:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.GktrxbLTTF 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GktrxbLTTF 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.GktrxbLTTF 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GktrxbLTTF 00:20:15.982 20:20:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:16.241 [2024-07-14 20:20:05.300689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.241 20:20:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:16.501 20:20:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:16.759 [2024-07-14 20:20:05.744735] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.760 [2024-07-14 20:20:05.745836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.760 20:20:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:17.017 malloc0 00:20:17.017 20:20:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:17.275 20:20:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GktrxbLTTF 00:20:17.534 [2024-07-14 20:20:06.503955] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:17.534 [2024-07-14 20:20:06.504006] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:17.534 [2024-07-14 20:20:06.504056] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:17.534 2024/07/14 20:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.GktrxbLTTF], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:20:17.534 request: 00:20:17.534 { 00:20:17.534 "method": "nvmf_subsystem_add_host", 00:20:17.534 "params": { 00:20:17.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.534 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.534 "psk": "/tmp/tmp.GktrxbLTTF" 00:20:17.534 } 00:20:17.534 } 00:20:17.534 Got JSON-RPC error response 00:20:17.534 GoRPCClient: error on JSON-RPC call 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 100062 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100062 ']' 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100062 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100062 00:20:17.534 killing process with pid 100062 00:20:17.534 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:17.535 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:17.535 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100062' 00:20:17.535 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100062 00:20:17.535 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100062 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.GktrxbLTTF 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100169 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100169 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100169 ']' 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
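The same permission rule is enforced on the target side: with the key still at 0666, nvmf_subsystem_add_host fails with the Internal error shown above, and the test only proceeds after restoring 0600. The sketch below condenses that recovery to the relevant calls (the traced script actually restarts the whole target before re-adding the host, and the stat -c %a guard is an extra GNU-coreutils assumption, not part of the script).

  key=/tmp/tmp.GktrxbLTTF
  chmod 0600 "$key"
  # optional guard before handing the key back to the target
  [[ "$(stat -c %a "$key")" == 600 ]] || { echo "unexpected PSK mode" >&2; exit 1; }
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"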
00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:17.793 20:20:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.052 [2024-07-14 20:20:06.887774] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:18.052 [2024-07-14 20:20:06.887843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.052 [2024-07-14 20:20:07.020792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.052 [2024-07-14 20:20:07.098528] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.052 [2024-07-14 20:20:07.098593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.052 [2024-07-14 20:20:07.098619] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.052 [2024-07-14 20:20:07.098627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.052 [2024-07-14 20:20:07.098633] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.052 [2024-07-14 20:20:07.098672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.GktrxbLTTF 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GktrxbLTTF 00:20:18.985 20:20:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:19.242 [2024-07-14 20:20:08.091331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.242 20:20:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:19.499 20:20:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:19.757 [2024-07-14 20:20:08.627453] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.757 [2024-07-14 20:20:08.627709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.757 20:20:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:20.016 malloc0 00:20:20.016 20:20:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:20.275 20:20:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GktrxbLTTF 00:20:20.275 [2024-07-14 20:20:09.334458] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:20.275 20:20:09 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=100271 00:20:20.275 20:20:09 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 100271 /var/tmp/bdevperf.sock 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100271 ']' 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:20.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:20.534 20:20:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.534 [2024-07-14 20:20:09.400970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:20.534 [2024-07-14 20:20:09.401050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100271 ] 00:20:20.534 [2024-07-14 20:20:09.536955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.792 [2024-07-14 20:20:09.649994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.359 20:20:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.359 20:20:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:21.359 20:20:10 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GktrxbLTTF 00:20:21.618 [2024-07-14 20:20:10.544640] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.618 [2024-07-14 20:20:10.545069] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:21.618 TLSTESTn1 00:20:21.618 20:20:10 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:22.187 20:20:10 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:22.187 "subsystems": [ 00:20:22.187 { 00:20:22.187 "subsystem": "keyring", 00:20:22.187 "config": [] 00:20:22.187 }, 00:20:22.187 { 00:20:22.187 "subsystem": "iobuf", 00:20:22.187 "config": [ 00:20:22.187 { 00:20:22.187 "method": "iobuf_set_options", 00:20:22.187 "params": { 00:20:22.187 "large_bufsize": 135168, 00:20:22.187 
"large_pool_count": 1024, 00:20:22.187 "small_bufsize": 8192, 00:20:22.187 "small_pool_count": 8192 00:20:22.187 } 00:20:22.187 } 00:20:22.187 ] 00:20:22.187 }, 00:20:22.187 { 00:20:22.187 "subsystem": "sock", 00:20:22.187 "config": [ 00:20:22.187 { 00:20:22.187 "method": "sock_set_default_impl", 00:20:22.187 "params": { 00:20:22.187 "impl_name": "posix" 00:20:22.187 } 00:20:22.187 }, 00:20:22.187 { 00:20:22.187 "method": "sock_impl_set_options", 00:20:22.187 "params": { 00:20:22.187 "enable_ktls": false, 00:20:22.188 "enable_placement_id": 0, 00:20:22.188 "enable_quickack": false, 00:20:22.188 "enable_recv_pipe": true, 00:20:22.188 "enable_zerocopy_send_client": false, 00:20:22.188 "enable_zerocopy_send_server": true, 00:20:22.188 "impl_name": "ssl", 00:20:22.188 "recv_buf_size": 4096, 00:20:22.188 "send_buf_size": 4096, 00:20:22.188 "tls_version": 0, 00:20:22.188 "zerocopy_threshold": 0 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "sock_impl_set_options", 00:20:22.188 "params": { 00:20:22.188 "enable_ktls": false, 00:20:22.188 "enable_placement_id": 0, 00:20:22.188 "enable_quickack": false, 00:20:22.188 "enable_recv_pipe": true, 00:20:22.188 "enable_zerocopy_send_client": false, 00:20:22.188 "enable_zerocopy_send_server": true, 00:20:22.188 "impl_name": "posix", 00:20:22.188 "recv_buf_size": 2097152, 00:20:22.188 "send_buf_size": 2097152, 00:20:22.188 "tls_version": 0, 00:20:22.188 "zerocopy_threshold": 0 00:20:22.188 } 00:20:22.188 } 00:20:22.188 ] 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "subsystem": "vmd", 00:20:22.188 "config": [] 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "subsystem": "accel", 00:20:22.188 "config": [ 00:20:22.188 { 00:20:22.188 "method": "accel_set_options", 00:20:22.188 "params": { 00:20:22.188 "buf_count": 2048, 00:20:22.188 "large_cache_size": 16, 00:20:22.188 "sequence_count": 2048, 00:20:22.188 "small_cache_size": 128, 00:20:22.188 "task_count": 2048 00:20:22.188 } 00:20:22.188 } 00:20:22.188 ] 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "subsystem": "bdev", 00:20:22.188 "config": [ 00:20:22.188 { 00:20:22.188 "method": "bdev_set_options", 00:20:22.188 "params": { 00:20:22.188 "bdev_auto_examine": true, 00:20:22.188 "bdev_io_cache_size": 256, 00:20:22.188 "bdev_io_pool_size": 65535, 00:20:22.188 "iobuf_large_cache_size": 16, 00:20:22.188 "iobuf_small_cache_size": 128 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "bdev_raid_set_options", 00:20:22.188 "params": { 00:20:22.188 "process_window_size_kb": 1024 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "bdev_iscsi_set_options", 00:20:22.188 "params": { 00:20:22.188 "timeout_sec": 30 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "bdev_nvme_set_options", 00:20:22.188 "params": { 00:20:22.188 "action_on_timeout": "none", 00:20:22.188 "allow_accel_sequence": false, 00:20:22.188 "arbitration_burst": 0, 00:20:22.188 "bdev_retry_count": 3, 00:20:22.188 "ctrlr_loss_timeout_sec": 0, 00:20:22.188 "delay_cmd_submit": true, 00:20:22.188 "dhchap_dhgroups": [ 00:20:22.188 "null", 00:20:22.188 "ffdhe2048", 00:20:22.188 "ffdhe3072", 00:20:22.188 "ffdhe4096", 00:20:22.188 "ffdhe6144", 00:20:22.188 "ffdhe8192" 00:20:22.188 ], 00:20:22.188 "dhchap_digests": [ 00:20:22.188 "sha256", 00:20:22.188 "sha384", 00:20:22.188 "sha512" 00:20:22.188 ], 00:20:22.188 "disable_auto_failback": false, 00:20:22.188 "fast_io_fail_timeout_sec": 0, 00:20:22.188 "generate_uuids": false, 00:20:22.188 "high_priority_weight": 0, 00:20:22.188 "io_path_stat": 
false, 00:20:22.188 "io_queue_requests": 0, 00:20:22.188 "keep_alive_timeout_ms": 10000, 00:20:22.188 "low_priority_weight": 0, 00:20:22.188 "medium_priority_weight": 0, 00:20:22.188 "nvme_adminq_poll_period_us": 10000, 00:20:22.188 "nvme_error_stat": false, 00:20:22.188 "nvme_ioq_poll_period_us": 0, 00:20:22.188 "rdma_cm_event_timeout_ms": 0, 00:20:22.188 "rdma_max_cq_size": 0, 00:20:22.188 "rdma_srq_size": 0, 00:20:22.188 "reconnect_delay_sec": 0, 00:20:22.188 "timeout_admin_us": 0, 00:20:22.188 "timeout_us": 0, 00:20:22.188 "transport_ack_timeout": 0, 00:20:22.188 "transport_retry_count": 4, 00:20:22.188 "transport_tos": 0 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "bdev_nvme_set_hotplug", 00:20:22.188 "params": { 00:20:22.188 "enable": false, 00:20:22.188 "period_us": 100000 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "bdev_malloc_create", 00:20:22.188 "params": { 00:20:22.188 "block_size": 4096, 00:20:22.188 "name": "malloc0", 00:20:22.188 "num_blocks": 8192, 00:20:22.188 "optimal_io_boundary": 0, 00:20:22.188 "physical_block_size": 4096, 00:20:22.188 "uuid": "b98602d5-8e7b-402f-b2a5-a4407611e6d2" 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "bdev_wait_for_examine" 00:20:22.188 } 00:20:22.188 ] 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "subsystem": "nbd", 00:20:22.188 "config": [] 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "subsystem": "scheduler", 00:20:22.188 "config": [ 00:20:22.188 { 00:20:22.188 "method": "framework_set_scheduler", 00:20:22.188 "params": { 00:20:22.188 "name": "static" 00:20:22.188 } 00:20:22.188 } 00:20:22.188 ] 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "subsystem": "nvmf", 00:20:22.188 "config": [ 00:20:22.188 { 00:20:22.188 "method": "nvmf_set_config", 00:20:22.188 "params": { 00:20:22.188 "admin_cmd_passthru": { 00:20:22.188 "identify_ctrlr": false 00:20:22.188 }, 00:20:22.188 "discovery_filter": "match_any" 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "nvmf_set_max_subsystems", 00:20:22.188 "params": { 00:20:22.188 "max_subsystems": 1024 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "nvmf_set_crdt", 00:20:22.188 "params": { 00:20:22.188 "crdt1": 0, 00:20:22.188 "crdt2": 0, 00:20:22.188 "crdt3": 0 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "nvmf_create_transport", 00:20:22.188 "params": { 00:20:22.188 "abort_timeout_sec": 1, 00:20:22.188 "ack_timeout": 0, 00:20:22.188 "buf_cache_size": 4294967295, 00:20:22.188 "c2h_success": false, 00:20:22.188 "data_wr_pool_size": 0, 00:20:22.188 "dif_insert_or_strip": false, 00:20:22.188 "in_capsule_data_size": 4096, 00:20:22.188 "io_unit_size": 131072, 00:20:22.188 "max_aq_depth": 128, 00:20:22.188 "max_io_qpairs_per_ctrlr": 127, 00:20:22.188 "max_io_size": 131072, 00:20:22.188 "max_queue_depth": 128, 00:20:22.188 "num_shared_buffers": 511, 00:20:22.188 "sock_priority": 0, 00:20:22.188 "trtype": "TCP", 00:20:22.188 "zcopy": false 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "nvmf_create_subsystem", 00:20:22.188 "params": { 00:20:22.188 "allow_any_host": false, 00:20:22.188 "ana_reporting": false, 00:20:22.188 "max_cntlid": 65519, 00:20:22.188 "max_namespaces": 10, 00:20:22.188 "min_cntlid": 1, 00:20:22.188 "model_number": "SPDK bdev Controller", 00:20:22.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.188 "serial_number": "SPDK00000000000001" 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "nvmf_subsystem_add_host", 
00:20:22.188 "params": { 00:20:22.188 "host": "nqn.2016-06.io.spdk:host1", 00:20:22.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.188 "psk": "/tmp/tmp.GktrxbLTTF" 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "nvmf_subsystem_add_ns", 00:20:22.188 "params": { 00:20:22.188 "namespace": { 00:20:22.188 "bdev_name": "malloc0", 00:20:22.188 "nguid": "B98602D58E7B402FB2A5A4407611E6D2", 00:20:22.188 "no_auto_visible": false, 00:20:22.188 "nsid": 1, 00:20:22.188 "uuid": "b98602d5-8e7b-402f-b2a5-a4407611e6d2" 00:20:22.188 }, 00:20:22.188 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:22.188 } 00:20:22.188 }, 00:20:22.188 { 00:20:22.188 "method": "nvmf_subsystem_add_listener", 00:20:22.188 "params": { 00:20:22.188 "listen_address": { 00:20:22.189 "adrfam": "IPv4", 00:20:22.189 "traddr": "10.0.0.2", 00:20:22.189 "trsvcid": "4420", 00:20:22.189 "trtype": "TCP" 00:20:22.189 }, 00:20:22.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.189 "secure_channel": true 00:20:22.189 } 00:20:22.189 } 00:20:22.189 ] 00:20:22.189 } 00:20:22.189 ] 00:20:22.189 }' 00:20:22.189 20:20:10 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:22.448 20:20:11 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:22.448 "subsystems": [ 00:20:22.448 { 00:20:22.448 "subsystem": "keyring", 00:20:22.448 "config": [] 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "subsystem": "iobuf", 00:20:22.448 "config": [ 00:20:22.448 { 00:20:22.448 "method": "iobuf_set_options", 00:20:22.448 "params": { 00:20:22.448 "large_bufsize": 135168, 00:20:22.448 "large_pool_count": 1024, 00:20:22.448 "small_bufsize": 8192, 00:20:22.448 "small_pool_count": 8192 00:20:22.448 } 00:20:22.448 } 00:20:22.448 ] 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "subsystem": "sock", 00:20:22.448 "config": [ 00:20:22.448 { 00:20:22.448 "method": "sock_set_default_impl", 00:20:22.448 "params": { 00:20:22.448 "impl_name": "posix" 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "sock_impl_set_options", 00:20:22.448 "params": { 00:20:22.448 "enable_ktls": false, 00:20:22.448 "enable_placement_id": 0, 00:20:22.448 "enable_quickack": false, 00:20:22.448 "enable_recv_pipe": true, 00:20:22.448 "enable_zerocopy_send_client": false, 00:20:22.448 "enable_zerocopy_send_server": true, 00:20:22.448 "impl_name": "ssl", 00:20:22.448 "recv_buf_size": 4096, 00:20:22.448 "send_buf_size": 4096, 00:20:22.448 "tls_version": 0, 00:20:22.448 "zerocopy_threshold": 0 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "sock_impl_set_options", 00:20:22.448 "params": { 00:20:22.448 "enable_ktls": false, 00:20:22.448 "enable_placement_id": 0, 00:20:22.448 "enable_quickack": false, 00:20:22.448 "enable_recv_pipe": true, 00:20:22.448 "enable_zerocopy_send_client": false, 00:20:22.448 "enable_zerocopy_send_server": true, 00:20:22.448 "impl_name": "posix", 00:20:22.448 "recv_buf_size": 2097152, 00:20:22.448 "send_buf_size": 2097152, 00:20:22.448 "tls_version": 0, 00:20:22.448 "zerocopy_threshold": 0 00:20:22.448 } 00:20:22.448 } 00:20:22.448 ] 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "subsystem": "vmd", 00:20:22.448 "config": [] 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "subsystem": "accel", 00:20:22.448 "config": [ 00:20:22.448 { 00:20:22.448 "method": "accel_set_options", 00:20:22.448 "params": { 00:20:22.448 "buf_count": 2048, 00:20:22.448 "large_cache_size": 16, 00:20:22.448 "sequence_count": 2048, 00:20:22.448 "small_cache_size": 128, 
00:20:22.448 "task_count": 2048 00:20:22.448 } 00:20:22.448 } 00:20:22.448 ] 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "subsystem": "bdev", 00:20:22.448 "config": [ 00:20:22.448 { 00:20:22.448 "method": "bdev_set_options", 00:20:22.448 "params": { 00:20:22.448 "bdev_auto_examine": true, 00:20:22.448 "bdev_io_cache_size": 256, 00:20:22.448 "bdev_io_pool_size": 65535, 00:20:22.448 "iobuf_large_cache_size": 16, 00:20:22.448 "iobuf_small_cache_size": 128 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "bdev_raid_set_options", 00:20:22.448 "params": { 00:20:22.448 "process_window_size_kb": 1024 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "bdev_iscsi_set_options", 00:20:22.448 "params": { 00:20:22.448 "timeout_sec": 30 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "bdev_nvme_set_options", 00:20:22.448 "params": { 00:20:22.448 "action_on_timeout": "none", 00:20:22.448 "allow_accel_sequence": false, 00:20:22.448 "arbitration_burst": 0, 00:20:22.448 "bdev_retry_count": 3, 00:20:22.448 "ctrlr_loss_timeout_sec": 0, 00:20:22.448 "delay_cmd_submit": true, 00:20:22.448 "dhchap_dhgroups": [ 00:20:22.448 "null", 00:20:22.448 "ffdhe2048", 00:20:22.448 "ffdhe3072", 00:20:22.448 "ffdhe4096", 00:20:22.448 "ffdhe6144", 00:20:22.448 "ffdhe8192" 00:20:22.448 ], 00:20:22.448 "dhchap_digests": [ 00:20:22.448 "sha256", 00:20:22.448 "sha384", 00:20:22.448 "sha512" 00:20:22.448 ], 00:20:22.448 "disable_auto_failback": false, 00:20:22.448 "fast_io_fail_timeout_sec": 0, 00:20:22.448 "generate_uuids": false, 00:20:22.448 "high_priority_weight": 0, 00:20:22.448 "io_path_stat": false, 00:20:22.448 "io_queue_requests": 512, 00:20:22.448 "keep_alive_timeout_ms": 10000, 00:20:22.448 "low_priority_weight": 0, 00:20:22.448 "medium_priority_weight": 0, 00:20:22.448 "nvme_adminq_poll_period_us": 10000, 00:20:22.448 "nvme_error_stat": false, 00:20:22.448 "nvme_ioq_poll_period_us": 0, 00:20:22.448 "rdma_cm_event_timeout_ms": 0, 00:20:22.448 "rdma_max_cq_size": 0, 00:20:22.448 "rdma_srq_size": 0, 00:20:22.448 "reconnect_delay_sec": 0, 00:20:22.448 "timeout_admin_us": 0, 00:20:22.448 "timeout_us": 0, 00:20:22.448 "transport_ack_timeout": 0, 00:20:22.448 "transport_retry_count": 4, 00:20:22.448 "transport_tos": 0 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "bdev_nvme_attach_controller", 00:20:22.448 "params": { 00:20:22.448 "adrfam": "IPv4", 00:20:22.448 "ctrlr_loss_timeout_sec": 0, 00:20:22.448 "ddgst": false, 00:20:22.448 "fast_io_fail_timeout_sec": 0, 00:20:22.448 "hdgst": false, 00:20:22.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.448 "name": "TLSTEST", 00:20:22.448 "prchk_guard": false, 00:20:22.448 "prchk_reftag": false, 00:20:22.448 "psk": "/tmp/tmp.GktrxbLTTF", 00:20:22.448 "reconnect_delay_sec": 0, 00:20:22.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.448 "traddr": "10.0.0.2", 00:20:22.448 "trsvcid": "4420", 00:20:22.448 "trtype": "TCP" 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "bdev_nvme_set_hotplug", 00:20:22.448 "params": { 00:20:22.448 "enable": false, 00:20:22.448 "period_us": 100000 00:20:22.448 } 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "method": "bdev_wait_for_examine" 00:20:22.448 } 00:20:22.448 ] 00:20:22.448 }, 00:20:22.448 { 00:20:22.448 "subsystem": "nbd", 00:20:22.448 "config": [] 00:20:22.448 } 00:20:22.448 ] 00:20:22.448 }' 00:20:22.448 20:20:11 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 100271 00:20:22.448 20:20:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@946 -- # '[' -z 100271 ']' 00:20:22.448 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100271 00:20:22.448 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:22.448 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:22.448 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100271 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:22.449 killing process with pid 100271 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100271' 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100271 00:20:22.449 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.449 00:20:22.449 Latency(us) 00:20:22.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.449 =================================================================================================================== 00:20:22.449 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.449 [2024-07-14 20:20:11.314369] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100271 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 100169 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100169 ']' 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100169 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:22.449 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100169 00:20:22.708 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:22.708 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:22.708 killing process with pid 100169 00:20:22.708 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100169' 00:20:22.708 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100169 00:20:22.708 [2024-07-14 20:20:11.543871] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:22.708 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100169 00:20:22.967 20:20:11 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:22.967 20:20:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:22.967 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:22.967 20:20:11 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:22.967 "subsystems": [ 00:20:22.967 { 00:20:22.967 "subsystem": "keyring", 00:20:22.967 "config": [] 00:20:22.967 }, 00:20:22.967 { 00:20:22.967 "subsystem": "iobuf", 00:20:22.967 "config": [ 00:20:22.967 { 00:20:22.967 "method": "iobuf_set_options", 00:20:22.967 "params": { 00:20:22.967 "large_bufsize": 135168, 00:20:22.967 
"large_pool_count": 1024, 00:20:22.967 "small_bufsize": 8192, 00:20:22.967 "small_pool_count": 8192 00:20:22.967 } 00:20:22.967 } 00:20:22.967 ] 00:20:22.967 }, 00:20:22.967 { 00:20:22.967 "subsystem": "sock", 00:20:22.967 "config": [ 00:20:22.967 { 00:20:22.967 "method": "sock_set_default_impl", 00:20:22.967 "params": { 00:20:22.967 "impl_name": "posix" 00:20:22.967 } 00:20:22.967 }, 00:20:22.967 { 00:20:22.967 "method": "sock_impl_set_options", 00:20:22.967 "params": { 00:20:22.967 "enable_ktls": false, 00:20:22.967 "enable_placement_id": 0, 00:20:22.967 "enable_quickack": false, 00:20:22.967 "enable_recv_pipe": true, 00:20:22.967 "enable_zerocopy_send_client": false, 00:20:22.967 "enable_zerocopy_send_server": true, 00:20:22.967 "impl_name": "ssl", 00:20:22.967 "recv_buf_size": 4096, 00:20:22.967 "send_buf_size": 4096, 00:20:22.967 "tls_version": 0, 00:20:22.967 "zerocopy_threshold": 0 00:20:22.967 } 00:20:22.967 }, 00:20:22.967 { 00:20:22.967 "method": "sock_impl_set_options", 00:20:22.967 "params": { 00:20:22.967 "enable_ktls": false, 00:20:22.967 "enable_placement_id": 0, 00:20:22.967 "enable_quickack": false, 00:20:22.967 "enable_recv_pipe": true, 00:20:22.967 "enable_zerocopy_send_client": false, 00:20:22.967 "enable_zerocopy_send_server": true, 00:20:22.967 "impl_name": "posix", 00:20:22.967 "recv_buf_size": 2097152, 00:20:22.967 "send_buf_size": 2097152, 00:20:22.967 "tls_version": 0, 00:20:22.967 "zerocopy_threshold": 0 00:20:22.967 } 00:20:22.967 } 00:20:22.967 ] 00:20:22.967 }, 00:20:22.967 { 00:20:22.967 "subsystem": "vmd", 00:20:22.967 "config": [] 00:20:22.967 }, 00:20:22.967 { 00:20:22.967 "subsystem": "accel", 00:20:22.967 "config": [ 00:20:22.967 { 00:20:22.967 "method": "accel_set_options", 00:20:22.967 "params": { 00:20:22.967 "buf_count": 2048, 00:20:22.967 "large_cache_size": 16, 00:20:22.968 "sequence_count": 2048, 00:20:22.968 "small_cache_size": 128, 00:20:22.968 "task_count": 2048 00:20:22.968 } 00:20:22.968 } 00:20:22.968 ] 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "subsystem": "bdev", 00:20:22.968 "config": [ 00:20:22.968 { 00:20:22.968 "method": "bdev_set_options", 00:20:22.968 "params": { 00:20:22.968 "bdev_auto_examine": true, 00:20:22.968 "bdev_io_cache_size": 256, 00:20:22.968 "bdev_io_pool_size": 65535, 00:20:22.968 "iobuf_large_cache_size": 16, 00:20:22.968 "iobuf_small_cache_size": 128 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "bdev_raid_set_options", 00:20:22.968 "params": { 00:20:22.968 "process_window_size_kb": 1024 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "bdev_iscsi_set_options", 00:20:22.968 "params": { 00:20:22.968 "timeout_sec": 30 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "bdev_nvme_set_options", 00:20:22.968 "params": { 00:20:22.968 "action_on_timeout": "none", 00:20:22.968 "allow_accel_sequence": false, 00:20:22.968 "arbitration_burst": 0, 00:20:22.968 "bdev_retry_count": 3, 00:20:22.968 "ctrlr_loss_timeout_sec": 0, 00:20:22.968 "delay_cmd_submit": true, 00:20:22.968 "dhchap_dhgroups": [ 00:20:22.968 "null", 00:20:22.968 "ffdhe2048", 00:20:22.968 "ffdhe3072", 00:20:22.968 "ffdhe4096", 00:20:22.968 "ffdhe6144", 00:20:22.968 "ffdhe8192" 00:20:22.968 ], 00:20:22.968 "dhchap_digests": [ 00:20:22.968 "sha256", 00:20:22.968 "sha384", 00:20:22.968 "sha512" 00:20:22.968 ], 00:20:22.968 "disable_auto_failback": false, 00:20:22.968 "fast_io_fail_timeout_sec": 0, 00:20:22.968 "generate_uuids": false, 00:20:22.968 "high_priority_weight": 0, 00:20:22.968 "io_path_stat": 
false, 00:20:22.968 "io_queue_requests": 0, 00:20:22.968 "keep_alive_timeout_ms": 10000, 00:20:22.968 "low_priority_weight": 0, 00:20:22.968 "medium_priority_weight": 0, 00:20:22.968 "nvme_adminq_poll_period_us": 10000, 00:20:22.968 "nvme_error_stat": false, 00:20:22.968 "nvme_ioq_poll_period_us": 0, 00:20:22.968 "rdma_cm_event_timeout_ms": 0, 00:20:22.968 "rdma_max_cq_size": 0, 00:20:22.968 "rdma_srq_size": 0, 00:20:22.968 "reconnect_delay_sec": 0, 00:20:22.968 "timeout_admin_us": 0, 00:20:22.968 "timeout_us": 0, 00:20:22.968 "transport_ack_timeout": 0, 00:20:22.968 "transport_retry_count": 4, 00:20:22.968 "transport_tos": 0 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "bdev_nvme_set_hotplug", 00:20:22.968 "params": { 00:20:22.968 "enable": false, 00:20:22.968 "period_us": 100000 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "bdev_malloc_create", 00:20:22.968 "params": { 00:20:22.968 "block_size": 4096, 00:20:22.968 "name": "malloc0", 00:20:22.968 "num_blocks": 8192, 00:20:22.968 "optimal_io_boundary": 0, 00:20:22.968 "physical_block_size": 4096, 00:20:22.968 "uuid": "b98602d5-8e7b-402f-b2a5-a4407611e6d2" 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "bdev_wait_for_examine" 00:20:22.968 } 00:20:22.968 ] 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "subsystem": "nbd", 00:20:22.968 "config": [] 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "subsystem": "scheduler", 00:20:22.968 "config": [ 00:20:22.968 { 00:20:22.968 "method": "framework_set_scheduler", 00:20:22.968 "params": { 00:20:22.968 "name": "static" 00:20:22.968 } 00:20:22.968 } 00:20:22.968 ] 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "subsystem": "nvmf", 00:20:22.968 "config": [ 00:20:22.968 { 00:20:22.968 "method": "nvmf_set_config", 00:20:22.968 "params": { 00:20:22.968 "admin_cmd_passthru": { 00:20:22.968 "identify_ctrlr": false 00:20:22.968 }, 00:20:22.968 "discovery_filter": "match_any" 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "nvmf_set_max_subsystems", 00:20:22.968 "params": { 00:20:22.968 "max_subsystems": 1024 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "nvmf_set_crdt", 00:20:22.968 "params": { 00:20:22.968 "crdt1": 0, 00:20:22.968 "crdt2": 0, 00:20:22.968 "crdt3": 0 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "nvmf_create_transport", 00:20:22.968 "params": { 00:20:22.968 "abort_timeout_sec": 1, 00:20:22.968 "ack_timeout": 0, 00:20:22.968 "buf_cache_size": 4294967295, 00:20:22.968 "c2h_success": false, 00:20:22.968 "data_wr_pool_size": 0, 00:20:22.968 "dif_insert_or_strip": false, 00:20:22.968 "in_capsule_data_size": 4096, 00:20:22.968 "io_unit_size": 131072, 00:20:22.968 "max_aq_depth": 128, 00:20:22.968 "max_io_qpairs_per_ctrlr": 127, 00:20:22.968 "max_io_size": 131072, 00:20:22.968 "max_queue_depth": 128, 00:20:22.968 "num_shared_buffers": 511, 00:20:22.968 "sock_priority": 0, 00:20:22.968 "trtype": "TCP", 00:20:22.968 "zcopy": false 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "nvmf_create_subsystem", 00:20:22.968 "params": { 00:20:22.968 "allow_any_host": false, 00:20:22.968 "ana_reporting": false, 00:20:22.968 "max_cntlid": 65519, 00:20:22.968 "max_namespaces": 10, 00:20:22.968 "min_cntlid": 1, 00:20:22.968 "model_number": "SPDK bdev Controller", 00:20:22.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.968 "serial_number": "SPDK00000000000001" 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "nvmf_subsystem_add_host", 
00:20:22.968 "params": { 00:20:22.968 "host": "nqn.2016-06.io.spdk:host1", 00:20:22.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.968 "psk": "/tmp/tmp.GktrxbLTTF" 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "nvmf_subsystem_add_ns", 00:20:22.968 "params": { 00:20:22.968 "namespace": { 00:20:22.968 "bdev_name": "malloc0", 00:20:22.968 "nguid": "B98602D58E7B402FB2A5A4407611E6D2", 00:20:22.968 "no_auto_visible": false, 00:20:22.968 "nsid": 1, 00:20:22.968 "uuid": "b98602d5-8e7b-402f-b2a5-a4407611e6d2" 00:20:22.968 }, 00:20:22.968 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:22.968 } 00:20:22.968 }, 00:20:22.968 { 00:20:22.968 "method": "nvmf_subsystem_add_listener", 00:20:22.968 "params": { 00:20:22.968 "listen_address": { 00:20:22.968 "adrfam": "IPv4", 00:20:22.968 "traddr": "10.0.0.2", 00:20:22.968 "trsvcid": "4420", 00:20:22.968 "trtype": "TCP" 00:20:22.968 }, 00:20:22.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.968 "secure_channel": true 00:20:22.968 } 00:20:22.968 } 00:20:22.968 ] 00:20:22.968 } 00:20:22.968 ] 00:20:22.968 }' 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100344 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100344 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100344 ']' 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:22.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:22.968 20:20:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.968 [2024-07-14 20:20:11.919827] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:22.968 [2024-07-14 20:20:11.919951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.227 [2024-07-14 20:20:12.060317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.227 [2024-07-14 20:20:12.161828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.227 [2024-07-14 20:20:12.161903] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.227 [2024-07-14 20:20:12.161930] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.227 [2024-07-14 20:20:12.161938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.227 [2024-07-14 20:20:12.161945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:23.227 [2024-07-14 20:20:12.162036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.486 [2024-07-14 20:20:12.421272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.486 [2024-07-14 20:20:12.437195] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:23.486 [2024-07-14 20:20:12.453206] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.486 [2024-07-14 20:20:12.453486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=100388 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 100388 /var/tmp/bdevperf.sock 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100388 ']' 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:24.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:24.055 20:20:12 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:24.055 "subsystems": [ 00:20:24.055 { 00:20:24.055 "subsystem": "keyring", 00:20:24.055 "config": [] 00:20:24.055 }, 00:20:24.055 { 00:20:24.055 "subsystem": "iobuf", 00:20:24.055 "config": [ 00:20:24.055 { 00:20:24.055 "method": "iobuf_set_options", 00:20:24.055 "params": { 00:20:24.055 "large_bufsize": 135168, 00:20:24.055 "large_pool_count": 1024, 00:20:24.055 "small_bufsize": 8192, 00:20:24.055 "small_pool_count": 8192 00:20:24.055 } 00:20:24.055 } 00:20:24.055 ] 00:20:24.055 }, 00:20:24.055 { 00:20:24.055 "subsystem": "sock", 00:20:24.055 "config": [ 00:20:24.055 { 00:20:24.055 "method": "sock_set_default_impl", 00:20:24.055 "params": { 00:20:24.055 "impl_name": "posix" 00:20:24.055 } 00:20:24.055 }, 00:20:24.055 { 00:20:24.055 "method": "sock_impl_set_options", 00:20:24.055 "params": { 00:20:24.055 "enable_ktls": false, 00:20:24.055 "enable_placement_id": 0, 00:20:24.055 "enable_quickack": false, 00:20:24.055 "enable_recv_pipe": true, 00:20:24.055 "enable_zerocopy_send_client": false, 00:20:24.055 "enable_zerocopy_send_server": true, 00:20:24.055 "impl_name": "ssl", 00:20:24.055 "recv_buf_size": 4096, 00:20:24.055 "send_buf_size": 4096, 00:20:24.055 "tls_version": 0, 00:20:24.055 "zerocopy_threshold": 0 00:20:24.055 } 00:20:24.055 }, 00:20:24.055 { 00:20:24.055 "method": "sock_impl_set_options", 00:20:24.055 "params": { 00:20:24.055 "enable_ktls": false, 00:20:24.055 "enable_placement_id": 0, 00:20:24.055 "enable_quickack": false, 00:20:24.055 "enable_recv_pipe": true, 00:20:24.055 "enable_zerocopy_send_client": false, 00:20:24.055 "enable_zerocopy_send_server": true, 00:20:24.055 "impl_name": "posix", 00:20:24.055 "recv_buf_size": 2097152, 00:20:24.055 "send_buf_size": 2097152, 00:20:24.055 "tls_version": 0, 00:20:24.055 "zerocopy_threshold": 0 00:20:24.055 } 00:20:24.055 } 00:20:24.055 ] 00:20:24.055 }, 00:20:24.055 { 00:20:24.055 "subsystem": "vmd", 00:20:24.055 "config": [] 00:20:24.055 }, 00:20:24.055 { 00:20:24.055 "subsystem": "accel", 00:20:24.055 "config": [ 00:20:24.055 { 00:20:24.055 "method": "accel_set_options", 00:20:24.055 "params": { 00:20:24.055 "buf_count": 2048, 00:20:24.055 "large_cache_size": 16, 00:20:24.055 "sequence_count": 2048, 00:20:24.055 "small_cache_size": 128, 00:20:24.055 "task_count": 2048 00:20:24.056 } 00:20:24.056 } 00:20:24.056 ] 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "subsystem": "bdev", 00:20:24.056 "config": [ 00:20:24.056 { 00:20:24.056 "method": "bdev_set_options", 00:20:24.056 "params": { 00:20:24.056 "bdev_auto_examine": true, 00:20:24.056 "bdev_io_cache_size": 256, 00:20:24.056 "bdev_io_pool_size": 65535, 00:20:24.056 "iobuf_large_cache_size": 16, 00:20:24.056 "iobuf_small_cache_size": 128 00:20:24.056 } 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "method": "bdev_raid_set_options", 00:20:24.056 "params": { 00:20:24.056 "process_window_size_kb": 1024 00:20:24.056 } 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "method": "bdev_iscsi_set_options", 00:20:24.056 "params": { 00:20:24.056 "timeout_sec": 30 00:20:24.056 } 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "method": 
"bdev_nvme_set_options", 00:20:24.056 "params": { 00:20:24.056 "action_on_timeout": "none", 00:20:24.056 "allow_accel_sequence": false, 00:20:24.056 "arbitration_burst": 0, 00:20:24.056 "bdev_retry_count": 3, 00:20:24.056 "ctrlr_loss_timeout_sec": 0, 00:20:24.056 "delay_cmd_submit": true, 00:20:24.056 "dhchap_dhgroups": [ 00:20:24.056 "null", 00:20:24.056 "ffdhe2048", 00:20:24.056 "ffdhe3072", 00:20:24.056 "ffdhe4096", 00:20:24.056 "ffdhe6144", 00:20:24.056 "ffdhe8192" 00:20:24.056 ], 00:20:24.056 "dhchap_digests": [ 00:20:24.056 "sha256", 00:20:24.056 "sha384", 00:20:24.056 "sha512" 00:20:24.056 ], 00:20:24.056 "disable_auto_failback": false, 00:20:24.056 "fast_io_fail_timeout_sec": 0, 00:20:24.056 "generate_uuids": false, 00:20:24.056 "high_priority_weight": 0, 00:20:24.056 "io_path_stat": false, 00:20:24.056 "io_queue_requests": 512, 00:20:24.056 "keep_alive_timeout_ms": 10000, 00:20:24.056 "low_priority_weight": 0, 00:20:24.056 "medium_priority_weight": 0, 00:20:24.056 "nvme_adminq_poll_period_us": 10000, 00:20:24.056 "nvme_error_stat": false, 00:20:24.056 "nvme_ioq_poll_period_us": 0, 00:20:24.056 "rdma_cm_event_timeout_ms": 0, 00:20:24.056 "rdma_max_cq_size": 0, 00:20:24.056 "rdma_srq_size": 0, 00:20:24.056 "reconnect_delay_sec": 0, 00:20:24.056 "timeout_admin_us": 0, 00:20:24.056 "timeout_us": 0, 00:20:24.056 "transport_ack_timeout": 0, 00:20:24.056 "transport_retry_count": 4, 00:20:24.056 "transport_tos": 0 00:20:24.056 } 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "method": "bdev_nvme_attach_controller", 00:20:24.056 "params": { 00:20:24.056 "adrfam": "IPv4", 00:20:24.056 "ctrlr_loss_timeout_sec": 0, 00:20:24.056 "ddgst": false, 00:20:24.056 "fast_io_fail_timeout_sec": 0, 00:20:24.056 "hdgst": false, 00:20:24.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.056 "name": "TLSTEST", 00:20:24.056 "prchk_guard": false, 00:20:24.056 "prchk_reftag": false, 00:20:24.056 "psk": "/tmp/tmp.GktrxbLTTF", 00:20:24.056 "reconnect_delay_sec": 0, 00:20:24.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.056 "traddr": "10.0.0.2", 00:20:24.056 "trsvcid": "4420", 00:20:24.056 "trtype": "TCP" 00:20:24.056 } 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "method": "bdev_nvme_set_hotplug", 00:20:24.056 "params": { 00:20:24.056 "enable": false, 00:20:24.056 "period_us": 100000 00:20:24.056 } 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "method": "bdev_wait_for_examine" 00:20:24.056 } 00:20:24.056 ] 00:20:24.056 }, 00:20:24.056 { 00:20:24.056 "subsystem": "nbd", 00:20:24.056 "config": [] 00:20:24.056 } 00:20:24.056 ] 00:20:24.056 }' 00:20:24.056 [2024-07-14 20:20:12.961786] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:20:24.056 [2024-07-14 20:20:12.962477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100388 ] 00:20:24.056 [2024-07-14 20:20:13.106647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.315 [2024-07-14 20:20:13.200526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.315 [2024-07-14 20:20:13.367604] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.315 [2024-07-14 20:20:13.367733] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:24.880 20:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:24.880 20:20:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:24.880 20:20:13 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:25.138 Running I/O for 10 seconds... 00:20:35.136 00:20:35.136 Latency(us) 00:20:35.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.136 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.136 Verification LBA range: start 0x0 length 0x2000 00:20:35.136 TLSTESTn1 : 10.01 3703.07 14.47 0.00 0.00 34513.73 4766.25 28597.53 00:20:35.136 =================================================================================================================== 00:20:35.136 Total : 3703.07 14.47 0.00 0.00 34513.73 4766.25 28597.53 00:20:35.136 0 00:20:35.136 20:20:23 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.136 20:20:23 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 100388 00:20:35.136 20:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100388 ']' 00:20:35.136 20:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100388 00:20:35.136 20:20:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:35.136 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.136 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100388 00:20:35.136 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:35.136 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:35.136 killing process with pid 100388 00:20:35.136 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100388' 00:20:35.136 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100388 00:20:35.136 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.136 00:20:35.136 Latency(us) 00:20:35.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.136 =================================================================================================================== 00:20:35.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.136 [2024-07-14 20:20:24.026289] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.136 20:20:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 100388 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 100344 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100344 ']' 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100344 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100344 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:35.394 killing process with pid 100344 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100344' 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100344 00:20:35.394 [2024-07-14 20:20:24.265096] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:35.394 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100344 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100538 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100538 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100538 ']' 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.653 20:20:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.653 [2024-07-14 20:20:24.643632] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:35.653 [2024-07-14 20:20:24.643733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.910 [2024-07-14 20:20:24.779459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.910 [2024-07-14 20:20:24.896744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.910 [2024-07-14 20:20:24.897145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:35.910 [2024-07-14 20:20:24.897166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.910 [2024-07-14 20:20:24.897175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.910 [2024-07-14 20:20:24.897183] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.910 [2024-07-14 20:20:24.897210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.476 20:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:36.476 20:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:36.476 20:20:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.476 20:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.476 20:20:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.734 20:20:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.734 20:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.GktrxbLTTF 00:20:36.734 20:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GktrxbLTTF 00:20:36.734 20:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:36.992 [2024-07-14 20:20:25.871899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.993 20:20:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:37.251 20:20:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:37.509 [2024-07-14 20:20:26.423993] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.509 [2024-07-14 20:20:26.424302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.509 20:20:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:37.768 malloc0 00:20:37.768 20:20:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:38.026 20:20:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GktrxbLTTF 00:20:38.285 [2024-07-14 20:20:27.186644] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=100636 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 100636 /var/tmp/bdevperf.sock 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100636 ']' 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- 
# local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:38.285 20:20:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.285 [2024-07-14 20:20:27.249323] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:38.285 [2024-07-14 20:20:27.249393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100636 ] 00:20:38.544 [2024-07-14 20:20:27.380814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.544 [2024-07-14 20:20:27.484933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.480 20:20:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:39.480 20:20:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:39.480 20:20:28 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GktrxbLTTF 00:20:39.480 20:20:28 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:39.739 [2024-07-14 20:20:28.662712] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.739 nvme0n1 00:20:39.739 20:20:28 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.997 Running I/O for 1 seconds... 
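Note: after the first target/initiator pair is torn down, the test repeats the connection using the keyring instead of a raw PSK path. The key file is registered once as "key0" on the bdevperf application and the controller attach references it by name, so the nvme_ctrlr_psk deprecation seen earlier does not reappear. The three RPC calls visible above amount to (socket path, key name and NQNs are the ones used in this run):

    # Sketch: keyring-based TLS attach from the bdevperf side.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GktrxbLTTF
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests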
00:20:40.933 00:20:40.933 Latency(us) 00:20:40.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.933 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:40.933 Verification LBA range: start 0x0 length 0x2000 00:20:40.933 nvme0n1 : 1.02 4417.21 17.25 0.00 0.00 28672.19 733.56 17277.67 00:20:40.933 =================================================================================================================== 00:20:40.933 Total : 4417.21 17.25 0.00 0.00 28672.19 733.56 17277.67 00:20:40.933 0 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 100636 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100636 ']' 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100636 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100636 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:40.933 killing process with pid 100636 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100636' 00:20:40.933 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100636 00:20:40.933 Received shutdown signal, test time was about 1.000000 seconds 00:20:40.933 00:20:40.933 Latency(us) 00:20:40.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.934 =================================================================================================================== 00:20:40.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.934 20:20:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100636 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 100538 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100538 ']' 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100538 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100538 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:41.202 killing process with pid 100538 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100538' 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100538 00:20:41.202 [2024-07-14 20:20:30.254701] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:41.202 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100538 00:20:41.769 20:20:30 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:41.769 20:20:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.769 20:20:30 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100713 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100713 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100713 ']' 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:41.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:41.770 20:20:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.770 [2024-07-14 20:20:30.641479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:41.770 [2024-07-14 20:20:30.641593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.770 [2024-07-14 20:20:30.783459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.029 [2024-07-14 20:20:30.894304] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.029 [2024-07-14 20:20:30.894362] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.029 [2024-07-14 20:20:30.894373] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.029 [2024-07-14 20:20:30.894380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.029 [2024-07-14 20:20:30.894387] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.029 [2024-07-14 20:20:30.894411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.596 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.596 [2024-07-14 20:20:31.648030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.596 malloc0 00:20:42.855 [2024-07-14 20:20:31.682394] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.855 [2024-07-14 20:20:31.682702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=100763 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 100763 /var/tmp/bdevperf.sock 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100763 ']' 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:42.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:42.855 20:20:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.855 [2024-07-14 20:20:31.768592] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:20:42.855 [2024-07-14 20:20:31.768709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100763 ] 00:20:42.855 [2024-07-14 20:20:31.909323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.114 [2024-07-14 20:20:32.031351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.681 20:20:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:43.681 20:20:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:43.681 20:20:32 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GktrxbLTTF 00:20:43.941 20:20:33 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:44.200 [2024-07-14 20:20:33.209749] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.458 nvme0n1 00:20:44.459 20:20:33 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.459 Running I/O for 1 seconds... 00:20:45.391 00:20:45.391 Latency(us) 00:20:45.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.391 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:45.391 Verification LBA range: start 0x0 length 0x2000 00:20:45.391 nvme0n1 : 1.03 4349.91 16.99 0.00 0.00 29114.62 6136.55 17635.14 00:20:45.391 =================================================================================================================== 00:20:45.391 Total : 4349.91 16.99 0.00 0.00 29114.62 6136.55 17635.14 00:20:45.391 0 00:20:45.391 20:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:45.391 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.391 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.649 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.649 20:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:45.649 "subsystems": [ 00:20:45.649 { 00:20:45.649 "subsystem": "keyring", 00:20:45.649 "config": [ 00:20:45.649 { 00:20:45.649 "method": "keyring_file_add_key", 00:20:45.649 "params": { 00:20:45.649 "name": "key0", 00:20:45.649 "path": "/tmp/tmp.GktrxbLTTF" 00:20:45.649 } 00:20:45.649 } 00:20:45.649 ] 00:20:45.649 }, 00:20:45.649 { 00:20:45.649 "subsystem": "iobuf", 00:20:45.649 "config": [ 00:20:45.649 { 00:20:45.649 "method": "iobuf_set_options", 00:20:45.649 "params": { 00:20:45.649 "large_bufsize": 135168, 00:20:45.649 "large_pool_count": 1024, 00:20:45.649 "small_bufsize": 8192, 00:20:45.649 "small_pool_count": 8192 00:20:45.649 } 00:20:45.649 } 00:20:45.649 ] 00:20:45.649 }, 00:20:45.649 { 00:20:45.649 "subsystem": "sock", 00:20:45.649 "config": [ 00:20:45.650 { 00:20:45.650 "method": "sock_set_default_impl", 00:20:45.650 "params": { 00:20:45.650 "impl_name": "posix" 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "sock_impl_set_options", 00:20:45.650 "params": { 00:20:45.650 "enable_ktls": false, 
00:20:45.650 "enable_placement_id": 0, 00:20:45.650 "enable_quickack": false, 00:20:45.650 "enable_recv_pipe": true, 00:20:45.650 "enable_zerocopy_send_client": false, 00:20:45.650 "enable_zerocopy_send_server": true, 00:20:45.650 "impl_name": "ssl", 00:20:45.650 "recv_buf_size": 4096, 00:20:45.650 "send_buf_size": 4096, 00:20:45.650 "tls_version": 0, 00:20:45.650 "zerocopy_threshold": 0 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "sock_impl_set_options", 00:20:45.650 "params": { 00:20:45.650 "enable_ktls": false, 00:20:45.650 "enable_placement_id": 0, 00:20:45.650 "enable_quickack": false, 00:20:45.650 "enable_recv_pipe": true, 00:20:45.650 "enable_zerocopy_send_client": false, 00:20:45.650 "enable_zerocopy_send_server": true, 00:20:45.650 "impl_name": "posix", 00:20:45.650 "recv_buf_size": 2097152, 00:20:45.650 "send_buf_size": 2097152, 00:20:45.650 "tls_version": 0, 00:20:45.650 "zerocopy_threshold": 0 00:20:45.650 } 00:20:45.650 } 00:20:45.650 ] 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "subsystem": "vmd", 00:20:45.650 "config": [] 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "subsystem": "accel", 00:20:45.650 "config": [ 00:20:45.650 { 00:20:45.650 "method": "accel_set_options", 00:20:45.650 "params": { 00:20:45.650 "buf_count": 2048, 00:20:45.650 "large_cache_size": 16, 00:20:45.650 "sequence_count": 2048, 00:20:45.650 "small_cache_size": 128, 00:20:45.650 "task_count": 2048 00:20:45.650 } 00:20:45.650 } 00:20:45.650 ] 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "subsystem": "bdev", 00:20:45.650 "config": [ 00:20:45.650 { 00:20:45.650 "method": "bdev_set_options", 00:20:45.650 "params": { 00:20:45.650 "bdev_auto_examine": true, 00:20:45.650 "bdev_io_cache_size": 256, 00:20:45.650 "bdev_io_pool_size": 65535, 00:20:45.650 "iobuf_large_cache_size": 16, 00:20:45.650 "iobuf_small_cache_size": 128 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "bdev_raid_set_options", 00:20:45.650 "params": { 00:20:45.650 "process_window_size_kb": 1024 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "bdev_iscsi_set_options", 00:20:45.650 "params": { 00:20:45.650 "timeout_sec": 30 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "bdev_nvme_set_options", 00:20:45.650 "params": { 00:20:45.650 "action_on_timeout": "none", 00:20:45.650 "allow_accel_sequence": false, 00:20:45.650 "arbitration_burst": 0, 00:20:45.650 "bdev_retry_count": 3, 00:20:45.650 "ctrlr_loss_timeout_sec": 0, 00:20:45.650 "delay_cmd_submit": true, 00:20:45.650 "dhchap_dhgroups": [ 00:20:45.650 "null", 00:20:45.650 "ffdhe2048", 00:20:45.650 "ffdhe3072", 00:20:45.650 "ffdhe4096", 00:20:45.650 "ffdhe6144", 00:20:45.650 "ffdhe8192" 00:20:45.650 ], 00:20:45.650 "dhchap_digests": [ 00:20:45.650 "sha256", 00:20:45.650 "sha384", 00:20:45.650 "sha512" 00:20:45.650 ], 00:20:45.650 "disable_auto_failback": false, 00:20:45.650 "fast_io_fail_timeout_sec": 0, 00:20:45.650 "generate_uuids": false, 00:20:45.650 "high_priority_weight": 0, 00:20:45.650 "io_path_stat": false, 00:20:45.650 "io_queue_requests": 0, 00:20:45.650 "keep_alive_timeout_ms": 10000, 00:20:45.650 "low_priority_weight": 0, 00:20:45.650 "medium_priority_weight": 0, 00:20:45.650 "nvme_adminq_poll_period_us": 10000, 00:20:45.650 "nvme_error_stat": false, 00:20:45.650 "nvme_ioq_poll_period_us": 0, 00:20:45.650 "rdma_cm_event_timeout_ms": 0, 00:20:45.650 "rdma_max_cq_size": 0, 00:20:45.650 "rdma_srq_size": 0, 00:20:45.650 "reconnect_delay_sec": 0, 00:20:45.650 "timeout_admin_us": 0, 00:20:45.650 
"timeout_us": 0, 00:20:45.650 "transport_ack_timeout": 0, 00:20:45.650 "transport_retry_count": 4, 00:20:45.650 "transport_tos": 0 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "bdev_nvme_set_hotplug", 00:20:45.650 "params": { 00:20:45.650 "enable": false, 00:20:45.650 "period_us": 100000 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "bdev_malloc_create", 00:20:45.650 "params": { 00:20:45.650 "block_size": 4096, 00:20:45.650 "name": "malloc0", 00:20:45.650 "num_blocks": 8192, 00:20:45.650 "optimal_io_boundary": 0, 00:20:45.650 "physical_block_size": 4096, 00:20:45.650 "uuid": "eac71e14-08e4-49d3-a81d-9580c5f860d4" 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "method": "bdev_wait_for_examine" 00:20:45.650 } 00:20:45.650 ] 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "subsystem": "nbd", 00:20:45.650 "config": [] 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "subsystem": "scheduler", 00:20:45.650 "config": [ 00:20:45.650 { 00:20:45.650 "method": "framework_set_scheduler", 00:20:45.650 "params": { 00:20:45.650 "name": "static" 00:20:45.650 } 00:20:45.650 } 00:20:45.650 ] 00:20:45.650 }, 00:20:45.650 { 00:20:45.650 "subsystem": "nvmf", 00:20:45.650 "config": [ 00:20:45.650 { 00:20:45.650 "method": "nvmf_set_config", 00:20:45.650 "params": { 00:20:45.650 "admin_cmd_passthru": { 00:20:45.650 "identify_ctrlr": false 00:20:45.650 }, 00:20:45.650 "discovery_filter": "match_any" 00:20:45.650 } 00:20:45.650 }, 00:20:45.650 { 00:20:45.651 "method": "nvmf_set_max_subsystems", 00:20:45.651 "params": { 00:20:45.651 "max_subsystems": 1024 00:20:45.651 } 00:20:45.651 }, 00:20:45.651 { 00:20:45.651 "method": "nvmf_set_crdt", 00:20:45.651 "params": { 00:20:45.651 "crdt1": 0, 00:20:45.651 "crdt2": 0, 00:20:45.651 "crdt3": 0 00:20:45.651 } 00:20:45.651 }, 00:20:45.651 { 00:20:45.651 "method": "nvmf_create_transport", 00:20:45.651 "params": { 00:20:45.651 "abort_timeout_sec": 1, 00:20:45.651 "ack_timeout": 0, 00:20:45.651 "buf_cache_size": 4294967295, 00:20:45.651 "c2h_success": false, 00:20:45.651 "data_wr_pool_size": 0, 00:20:45.651 "dif_insert_or_strip": false, 00:20:45.651 "in_capsule_data_size": 4096, 00:20:45.651 "io_unit_size": 131072, 00:20:45.651 "max_aq_depth": 128, 00:20:45.651 "max_io_qpairs_per_ctrlr": 127, 00:20:45.651 "max_io_size": 131072, 00:20:45.651 "max_queue_depth": 128, 00:20:45.651 "num_shared_buffers": 511, 00:20:45.651 "sock_priority": 0, 00:20:45.651 "trtype": "TCP", 00:20:45.651 "zcopy": false 00:20:45.651 } 00:20:45.651 }, 00:20:45.651 { 00:20:45.651 "method": "nvmf_create_subsystem", 00:20:45.651 "params": { 00:20:45.651 "allow_any_host": false, 00:20:45.651 "ana_reporting": false, 00:20:45.651 "max_cntlid": 65519, 00:20:45.651 "max_namespaces": 32, 00:20:45.651 "min_cntlid": 1, 00:20:45.651 "model_number": "SPDK bdev Controller", 00:20:45.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.651 "serial_number": "00000000000000000000" 00:20:45.651 } 00:20:45.651 }, 00:20:45.651 { 00:20:45.651 "method": "nvmf_subsystem_add_host", 00:20:45.651 "params": { 00:20:45.651 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.651 "psk": "key0" 00:20:45.651 } 00:20:45.651 }, 00:20:45.651 { 00:20:45.651 "method": "nvmf_subsystem_add_ns", 00:20:45.651 "params": { 00:20:45.651 "namespace": { 00:20:45.651 "bdev_name": "malloc0", 00:20:45.651 "nguid": "EAC71E1408E449D3A81D9580C5F860D4", 00:20:45.651 "no_auto_visible": false, 00:20:45.651 "nsid": 1, 00:20:45.651 "uuid": 
"eac71e14-08e4-49d3-a81d-9580c5f860d4" 00:20:45.651 }, 00:20:45.651 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:45.651 } 00:20:45.651 }, 00:20:45.651 { 00:20:45.651 "method": "nvmf_subsystem_add_listener", 00:20:45.651 "params": { 00:20:45.651 "listen_address": { 00:20:45.651 "adrfam": "IPv4", 00:20:45.651 "traddr": "10.0.0.2", 00:20:45.651 "trsvcid": "4420", 00:20:45.651 "trtype": "TCP" 00:20:45.651 }, 00:20:45.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.651 "secure_channel": true 00:20:45.651 } 00:20:45.651 } 00:20:45.651 ] 00:20:45.651 } 00:20:45.651 ] 00:20:45.651 }' 00:20:45.651 20:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.910 20:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:45.910 "subsystems": [ 00:20:45.910 { 00:20:45.910 "subsystem": "keyring", 00:20:45.910 "config": [ 00:20:45.910 { 00:20:45.910 "method": "keyring_file_add_key", 00:20:45.910 "params": { 00:20:45.910 "name": "key0", 00:20:45.910 "path": "/tmp/tmp.GktrxbLTTF" 00:20:45.910 } 00:20:45.910 } 00:20:45.910 ] 00:20:45.910 }, 00:20:45.910 { 00:20:45.910 "subsystem": "iobuf", 00:20:45.910 "config": [ 00:20:45.910 { 00:20:45.910 "method": "iobuf_set_options", 00:20:45.910 "params": { 00:20:45.910 "large_bufsize": 135168, 00:20:45.910 "large_pool_count": 1024, 00:20:45.910 "small_bufsize": 8192, 00:20:45.910 "small_pool_count": 8192 00:20:45.910 } 00:20:45.910 } 00:20:45.910 ] 00:20:45.910 }, 00:20:45.910 { 00:20:45.910 "subsystem": "sock", 00:20:45.910 "config": [ 00:20:45.910 { 00:20:45.910 "method": "sock_set_default_impl", 00:20:45.910 "params": { 00:20:45.910 "impl_name": "posix" 00:20:45.910 } 00:20:45.910 }, 00:20:45.910 { 00:20:45.910 "method": "sock_impl_set_options", 00:20:45.910 "params": { 00:20:45.910 "enable_ktls": false, 00:20:45.910 "enable_placement_id": 0, 00:20:45.910 "enable_quickack": false, 00:20:45.910 "enable_recv_pipe": true, 00:20:45.910 "enable_zerocopy_send_client": false, 00:20:45.910 "enable_zerocopy_send_server": true, 00:20:45.910 "impl_name": "ssl", 00:20:45.910 "recv_buf_size": 4096, 00:20:45.910 "send_buf_size": 4096, 00:20:45.910 "tls_version": 0, 00:20:45.910 "zerocopy_threshold": 0 00:20:45.910 } 00:20:45.910 }, 00:20:45.910 { 00:20:45.910 "method": "sock_impl_set_options", 00:20:45.910 "params": { 00:20:45.910 "enable_ktls": false, 00:20:45.910 "enable_placement_id": 0, 00:20:45.910 "enable_quickack": false, 00:20:45.910 "enable_recv_pipe": true, 00:20:45.910 "enable_zerocopy_send_client": false, 00:20:45.910 "enable_zerocopy_send_server": true, 00:20:45.910 "impl_name": "posix", 00:20:45.910 "recv_buf_size": 2097152, 00:20:45.910 "send_buf_size": 2097152, 00:20:45.910 "tls_version": 0, 00:20:45.910 "zerocopy_threshold": 0 00:20:45.910 } 00:20:45.910 } 00:20:45.910 ] 00:20:45.910 }, 00:20:45.910 { 00:20:45.910 "subsystem": "vmd", 00:20:45.910 "config": [] 00:20:45.910 }, 00:20:45.910 { 00:20:45.910 "subsystem": "accel", 00:20:45.910 "config": [ 00:20:45.910 { 00:20:45.910 "method": "accel_set_options", 00:20:45.910 "params": { 00:20:45.910 "buf_count": 2048, 00:20:45.910 "large_cache_size": 16, 00:20:45.910 "sequence_count": 2048, 00:20:45.910 "small_cache_size": 128, 00:20:45.910 "task_count": 2048 00:20:45.910 } 00:20:45.910 } 00:20:45.910 ] 00:20:45.910 }, 00:20:45.910 { 00:20:45.910 "subsystem": "bdev", 00:20:45.910 "config": [ 00:20:45.910 { 00:20:45.910 "method": "bdev_set_options", 00:20:45.910 "params": { 00:20:45.910 "bdev_auto_examine": true, 
00:20:45.910 "bdev_io_cache_size": 256, 00:20:45.910 "bdev_io_pool_size": 65535, 00:20:45.910 "iobuf_large_cache_size": 16, 00:20:45.911 "iobuf_small_cache_size": 128 00:20:45.911 } 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "method": "bdev_raid_set_options", 00:20:45.911 "params": { 00:20:45.911 "process_window_size_kb": 1024 00:20:45.911 } 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "method": "bdev_iscsi_set_options", 00:20:45.911 "params": { 00:20:45.911 "timeout_sec": 30 00:20:45.911 } 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "method": "bdev_nvme_set_options", 00:20:45.911 "params": { 00:20:45.911 "action_on_timeout": "none", 00:20:45.911 "allow_accel_sequence": false, 00:20:45.911 "arbitration_burst": 0, 00:20:45.911 "bdev_retry_count": 3, 00:20:45.911 "ctrlr_loss_timeout_sec": 0, 00:20:45.911 "delay_cmd_submit": true, 00:20:45.911 "dhchap_dhgroups": [ 00:20:45.911 "null", 00:20:45.911 "ffdhe2048", 00:20:45.911 "ffdhe3072", 00:20:45.911 "ffdhe4096", 00:20:45.911 "ffdhe6144", 00:20:45.911 "ffdhe8192" 00:20:45.911 ], 00:20:45.911 "dhchap_digests": [ 00:20:45.911 "sha256", 00:20:45.911 "sha384", 00:20:45.911 "sha512" 00:20:45.911 ], 00:20:45.911 "disable_auto_failback": false, 00:20:45.911 "fast_io_fail_timeout_sec": 0, 00:20:45.911 "generate_uuids": false, 00:20:45.911 "high_priority_weight": 0, 00:20:45.911 "io_path_stat": false, 00:20:45.911 "io_queue_requests": 512, 00:20:45.911 "keep_alive_timeout_ms": 10000, 00:20:45.911 "low_priority_weight": 0, 00:20:45.911 "medium_priority_weight": 0, 00:20:45.911 "nvme_adminq_poll_period_us": 10000, 00:20:45.911 "nvme_error_stat": false, 00:20:45.911 "nvme_ioq_poll_period_us": 0, 00:20:45.911 "rdma_cm_event_timeout_ms": 0, 00:20:45.911 "rdma_max_cq_size": 0, 00:20:45.911 "rdma_srq_size": 0, 00:20:45.911 "reconnect_delay_sec": 0, 00:20:45.911 "timeout_admin_us": 0, 00:20:45.911 "timeout_us": 0, 00:20:45.911 "transport_ack_timeout": 0, 00:20:45.911 "transport_retry_count": 4, 00:20:45.911 "transport_tos": 0 00:20:45.911 } 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "method": "bdev_nvme_attach_controller", 00:20:45.911 "params": { 00:20:45.911 "adrfam": "IPv4", 00:20:45.911 "ctrlr_loss_timeout_sec": 0, 00:20:45.911 "ddgst": false, 00:20:45.911 "fast_io_fail_timeout_sec": 0, 00:20:45.911 "hdgst": false, 00:20:45.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.911 "name": "nvme0", 00:20:45.911 "prchk_guard": false, 00:20:45.911 "prchk_reftag": false, 00:20:45.911 "psk": "key0", 00:20:45.911 "reconnect_delay_sec": 0, 00:20:45.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.911 "traddr": "10.0.0.2", 00:20:45.911 "trsvcid": "4420", 00:20:45.911 "trtype": "TCP" 00:20:45.911 } 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "method": "bdev_nvme_set_hotplug", 00:20:45.911 "params": { 00:20:45.911 "enable": false, 00:20:45.911 "period_us": 100000 00:20:45.911 } 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "method": "bdev_enable_histogram", 00:20:45.911 "params": { 00:20:45.911 "enable": true, 00:20:45.911 "name": "nvme0n1" 00:20:45.911 } 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "method": "bdev_wait_for_examine" 00:20:45.911 } 00:20:45.911 ] 00:20:45.911 }, 00:20:45.911 { 00:20:45.911 "subsystem": "nbd", 00:20:45.911 "config": [] 00:20:45.911 } 00:20:45.911 ] 00:20:45.911 }' 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 100763 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100763 ']' 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100763 
00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100763 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100763' 00:20:45.911 killing process with pid 100763 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100763 00:20:45.911 Received shutdown signal, test time was about 1.000000 seconds 00:20:45.911 00:20:45.911 Latency(us) 00:20:45.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.911 =================================================================================================================== 00:20:45.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.911 20:20:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100763 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 100713 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100713 ']' 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100713 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100713 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:46.170 killing process with pid 100713 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100713' 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100713 00:20:46.170 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100713 00:20:46.739 20:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:46.739 20:20:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.739 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:46.739 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.739 20:20:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:46.739 "subsystems": [ 00:20:46.739 { 00:20:46.739 "subsystem": "keyring", 00:20:46.739 "config": [ 00:20:46.739 { 00:20:46.739 "method": "keyring_file_add_key", 00:20:46.739 "params": { 00:20:46.739 "name": "key0", 00:20:46.739 "path": "/tmp/tmp.GktrxbLTTF" 00:20:46.739 } 00:20:46.739 } 00:20:46.739 ] 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "subsystem": "iobuf", 00:20:46.739 "config": [ 00:20:46.739 { 00:20:46.739 "method": "iobuf_set_options", 00:20:46.739 "params": { 00:20:46.739 "large_bufsize": 135168, 00:20:46.739 "large_pool_count": 1024, 00:20:46.739 "small_bufsize": 8192, 00:20:46.739 "small_pool_count": 8192 00:20:46.739 } 00:20:46.739 } 00:20:46.739 ] 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "subsystem": "sock", 00:20:46.739 "config": [ 
00:20:46.739 { 00:20:46.739 "method": "sock_set_default_impl", 00:20:46.739 "params": { 00:20:46.739 "impl_name": "posix" 00:20:46.739 } 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "method": "sock_impl_set_options", 00:20:46.739 "params": { 00:20:46.739 "enable_ktls": false, 00:20:46.739 "enable_placement_id": 0, 00:20:46.739 "enable_quickack": false, 00:20:46.739 "enable_recv_pipe": true, 00:20:46.739 "enable_zerocopy_send_client": false, 00:20:46.739 "enable_zerocopy_send_server": true, 00:20:46.739 "impl_name": "ssl", 00:20:46.739 "recv_buf_size": 4096, 00:20:46.739 "send_buf_size": 4096, 00:20:46.739 "tls_version": 0, 00:20:46.739 "zerocopy_threshold": 0 00:20:46.739 } 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "method": "sock_impl_set_options", 00:20:46.739 "params": { 00:20:46.739 "enable_ktls": false, 00:20:46.739 "enable_placement_id": 0, 00:20:46.739 "enable_quickack": false, 00:20:46.739 "enable_recv_pipe": true, 00:20:46.739 "enable_zerocopy_send_client": false, 00:20:46.739 "enable_zerocopy_send_server": true, 00:20:46.739 "impl_name": "posix", 00:20:46.739 "recv_buf_size": 2097152, 00:20:46.739 "send_buf_size": 2097152, 00:20:46.739 "tls_version": 0, 00:20:46.739 "zerocopy_threshold": 0 00:20:46.739 } 00:20:46.739 } 00:20:46.739 ] 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "subsystem": "vmd", 00:20:46.739 "config": [] 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "subsystem": "accel", 00:20:46.739 "config": [ 00:20:46.739 { 00:20:46.739 "method": "accel_set_options", 00:20:46.739 "params": { 00:20:46.739 "buf_count": 2048, 00:20:46.739 "large_cache_size": 16, 00:20:46.739 "sequence_count": 2048, 00:20:46.739 "small_cache_size": 128, 00:20:46.739 "task_count": 2048 00:20:46.739 } 00:20:46.739 } 00:20:46.739 ] 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "subsystem": "bdev", 00:20:46.739 "config": [ 00:20:46.739 { 00:20:46.739 "method": "bdev_set_options", 00:20:46.739 "params": { 00:20:46.739 "bdev_auto_examine": true, 00:20:46.739 "bdev_io_cache_size": 256, 00:20:46.739 "bdev_io_pool_size": 65535, 00:20:46.739 "iobuf_large_cache_size": 16, 00:20:46.739 "iobuf_small_cache_size": 128 00:20:46.739 } 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "method": "bdev_raid_set_options", 00:20:46.739 "params": { 00:20:46.739 "process_window_size_kb": 1024 00:20:46.739 } 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "method": "bdev_iscsi_set_options", 00:20:46.739 "params": { 00:20:46.739 "timeout_sec": 30 00:20:46.739 } 00:20:46.739 }, 00:20:46.739 { 00:20:46.739 "method": "bdev_nvme_set_options", 00:20:46.739 "params": { 00:20:46.739 "action_on_timeout": "none", 00:20:46.739 "allow_accel_sequence": false, 00:20:46.739 "arbitration_burst": 0, 00:20:46.739 "bdev_retry_count": 3, 00:20:46.739 "ctrlr_loss_timeout_sec": 0, 00:20:46.739 "delay_cmd_submit": true, 00:20:46.739 "dhchap_dhgroups": [ 00:20:46.740 "null", 00:20:46.740 "ffdhe2048", 00:20:46.740 "ffdhe3072", 00:20:46.740 "ffdhe4096", 00:20:46.740 "ffdhe6144", 00:20:46.740 "ffdhe8192" 00:20:46.740 ], 00:20:46.740 "dhchap_digests": [ 00:20:46.740 "sha256", 00:20:46.740 "sha384", 00:20:46.740 "sha512" 00:20:46.740 ], 00:20:46.740 "disable_auto_failback": false, 00:20:46.740 "fast_io_fail_timeout_sec": 0, 00:20:46.740 "generate_uuids": false, 00:20:46.740 "high_priority_weight": 0, 00:20:46.740 "io_path_stat": false, 00:20:46.740 "io_queue_requests": 0, 00:20:46.740 "keep_alive_timeout_ms": 10000, 00:20:46.740 "low_priority_weight": 0, 00:20:46.740 "medium_priority_weight": 0, 00:20:46.740 "nvme_adminq_poll_period_us": 10000, 00:20:46.740 
"nvme_error_stat": false, 00:20:46.740 "nvme_ioq_poll_period_us": 0, 00:20:46.740 "rdma_cm_event_timeout_ms": 0, 00:20:46.740 "rdma_max_cq_size": 0, 00:20:46.740 "rdma_srq_size": 0, 00:20:46.740 "reconnect_delay_sec": 0, 00:20:46.740 "timeout_admin_us": 0, 00:20:46.740 "timeout_us": 0, 00:20:46.740 "transport_ack_timeout": 0, 00:20:46.740 "transport_retry_count": 4, 00:20:46.740 "transport_tos": 0 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "bdev_nvme_set_hotplug", 00:20:46.740 "params": { 00:20:46.740 "enable": false, 00:20:46.740 "period_us": 100000 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "bdev_malloc_create", 00:20:46.740 "params": { 00:20:46.740 "block_size": 4096, 00:20:46.740 "name": "malloc0", 00:20:46.740 "num_blocks": 8192, 00:20:46.740 "optimal_io_boundary": 0, 00:20:46.740 "physical_block_size": 4096, 00:20:46.740 "uuid": "eac71e14-08e4-49d3-a81d-9580c5f860d4" 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "bdev_wait_for_examine" 00:20:46.740 } 00:20:46.740 ] 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "subsystem": "nbd", 00:20:46.740 "config": [] 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "subsystem": "scheduler", 00:20:46.740 "config": [ 00:20:46.740 { 00:20:46.740 "method": "framework_set_scheduler", 00:20:46.740 "params": { 00:20:46.740 "name": "static" 00:20:46.740 } 00:20:46.740 } 00:20:46.740 ] 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "subsystem": "nvmf", 00:20:46.740 "config": [ 00:20:46.740 { 00:20:46.740 "method": "nvmf_set_config", 00:20:46.740 "params": { 00:20:46.740 "admin_cmd_passthru": { 00:20:46.740 "identify_ctrlr": false 00:20:46.740 }, 00:20:46.740 "discovery_filter": "match_any" 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "nvmf_set_max_subsystems", 00:20:46.740 "params": { 00:20:46.740 "max_subsystems": 1024 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "nvmf_set_crdt", 00:20:46.740 "params": { 00:20:46.740 "crdt1": 0, 00:20:46.740 "crdt2": 0, 00:20:46.740 "crdt3": 0 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "nvmf_create_transport", 00:20:46.740 "params": { 00:20:46.740 "abort_timeout_sec": 1, 00:20:46.740 "ack_timeout": 0, 00:20:46.740 "buf_cache_size": 4294967295, 00:20:46.740 "c2h_success": false, 00:20:46.740 "data_wr_pool_size": 0, 00:20:46.740 "dif_insert_or_strip": false, 00:20:46.740 "in_capsule_data_size": 4096, 00:20:46.740 "io_unit_size": 131072, 00:20:46.740 "max_aq_depth": 128, 00:20:46.740 "max_io_qpairs_per_ctrlr": 127, 00:20:46.740 "max_io_size": 131072, 00:20:46.740 "max_queue_depth": 128, 00:20:46.740 "num_shared_buffers": 511, 00:20:46.740 "sock_priority": 0, 00:20:46.740 "trtype": "TCP", 00:20:46.740 "zcopy": false 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "nvmf_create_subsystem", 00:20:46.740 "params": { 00:20:46.740 "allow_any_host": false, 00:20:46.740 "ana_reporting": false, 00:20:46.740 "max_cntlid": 65519, 00:20:46.740 "max_namespaces": 32, 00:20:46.740 "min_cntlid": 1, 00:20:46.740 "model_number": "SPDK bdev Controller", 00:20:46.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.740 "serial_number": "00000000000000000000" 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "nvmf_subsystem_add_host", 00:20:46.740 "params": { 00:20:46.740 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.740 "psk": "key0" 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": 
"nvmf_subsystem_add_ns", 00:20:46.740 "params": { 00:20:46.740 "namespace": { 00:20:46.740 "bdev_name": "malloc0", 00:20:46.740 "nguid": "EAC71E1408E449D3A81D9580C5F860D4", 00:20:46.740 "no_auto_visible": false, 00:20:46.740 "nsid": 1, 00:20:46.740 "uuid": "eac71e14-08e4-49d3-a81d-9580c5f860d4" 00:20:46.740 }, 00:20:46.740 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:46.740 } 00:20:46.740 }, 00:20:46.740 { 00:20:46.740 "method": "nvmf_subsystem_add_listener", 00:20:46.740 "params": { 00:20:46.740 "listen_address": { 00:20:46.740 "adrfam": "IPv4", 00:20:46.740 "traddr": "10.0.0.2", 00:20:46.740 "trsvcid": "4420", 00:20:46.740 "trtype": "TCP" 00:20:46.740 }, 00:20:46.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.740 "secure_channel": true 00:20:46.740 } 00:20:46.740 } 00:20:46.740 ] 00:20:46.740 } 00:20:46.740 ] 00:20:46.740 }' 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100852 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100852 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100852 ']' 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.740 20:20:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.740 [2024-07-14 20:20:35.589050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:46.740 [2024-07-14 20:20:35.589126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.740 [2024-07-14 20:20:35.720669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.740 [2024-07-14 20:20:35.816754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.740 [2024-07-14 20:20:35.816813] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.740 [2024-07-14 20:20:35.816823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.740 [2024-07-14 20:20:35.816841] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.740 [2024-07-14 20:20:35.816847] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.740 [2024-07-14 20:20:35.816948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.308 [2024-07-14 20:20:36.086684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.308 [2024-07-14 20:20:36.118620] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.308 [2024-07-14 20:20:36.118839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=100898 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 100898 /var/tmp/bdevperf.sock 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100898 ']' 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:47.568 20:20:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:47.568 "subsystems": [ 00:20:47.568 { 00:20:47.568 "subsystem": "keyring", 00:20:47.568 "config": [ 00:20:47.568 { 00:20:47.568 "method": "keyring_file_add_key", 00:20:47.568 "params": { 00:20:47.568 "name": "key0", 00:20:47.568 "path": "/tmp/tmp.GktrxbLTTF" 00:20:47.568 } 00:20:47.568 } 00:20:47.568 ] 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "subsystem": "iobuf", 00:20:47.568 "config": [ 00:20:47.568 { 00:20:47.568 "method": "iobuf_set_options", 00:20:47.568 "params": { 00:20:47.568 "large_bufsize": 135168, 00:20:47.568 "large_pool_count": 1024, 00:20:47.568 "small_bufsize": 8192, 00:20:47.568 "small_pool_count": 8192 00:20:47.568 } 00:20:47.568 } 00:20:47.568 ] 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "subsystem": "sock", 00:20:47.568 "config": [ 00:20:47.568 { 00:20:47.568 "method": "sock_set_default_impl", 00:20:47.568 "params": { 00:20:47.568 "impl_name": "posix" 00:20:47.568 } 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "method": "sock_impl_set_options", 00:20:47.568 "params": { 00:20:47.568 "enable_ktls": false, 00:20:47.568 "enable_placement_id": 0, 00:20:47.568 "enable_quickack": false, 00:20:47.568 "enable_recv_pipe": true, 00:20:47.568 "enable_zerocopy_send_client": false, 00:20:47.568 "enable_zerocopy_send_server": true, 00:20:47.568 "impl_name": "ssl", 00:20:47.568 "recv_buf_size": 4096, 00:20:47.568 "send_buf_size": 4096, 00:20:47.568 "tls_version": 0, 00:20:47.568 "zerocopy_threshold": 0 00:20:47.568 } 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "method": "sock_impl_set_options", 00:20:47.568 "params": { 00:20:47.568 "enable_ktls": false, 00:20:47.568 "enable_placement_id": 0, 00:20:47.568 "enable_quickack": false, 00:20:47.568 "enable_recv_pipe": true, 00:20:47.568 "enable_zerocopy_send_client": false, 00:20:47.568 "enable_zerocopy_send_server": true, 00:20:47.568 "impl_name": "posix", 00:20:47.568 "recv_buf_size": 2097152, 00:20:47.568 "send_buf_size": 2097152, 00:20:47.568 "tls_version": 0, 00:20:47.568 "zerocopy_threshold": 0 00:20:47.568 } 00:20:47.568 } 00:20:47.568 ] 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "subsystem": "vmd", 00:20:47.568 "config": [] 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "subsystem": "accel", 00:20:47.568 "config": [ 00:20:47.568 { 00:20:47.568 "method": "accel_set_options", 00:20:47.568 "params": { 00:20:47.568 "buf_count": 2048, 00:20:47.568 "large_cache_size": 16, 00:20:47.568 "sequence_count": 2048, 00:20:47.568 "small_cache_size": 128, 00:20:47.568 "task_count": 2048 00:20:47.568 } 00:20:47.568 } 00:20:47.568 ] 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "subsystem": "bdev", 00:20:47.568 "config": [ 00:20:47.568 { 00:20:47.568 "method": "bdev_set_options", 00:20:47.568 "params": { 00:20:47.568 "bdev_auto_examine": true, 00:20:47.568 "bdev_io_cache_size": 256, 00:20:47.568 "bdev_io_pool_size": 65535, 00:20:47.568 "iobuf_large_cache_size": 16, 00:20:47.568 "iobuf_small_cache_size": 128 00:20:47.568 } 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "method": "bdev_raid_set_options", 00:20:47.568 "params": { 00:20:47.568 "process_window_size_kb": 1024 00:20:47.568 } 00:20:47.568 }, 00:20:47.568 
{ 00:20:47.568 "method": "bdev_iscsi_set_options", 00:20:47.568 "params": { 00:20:47.568 "timeout_sec": 30 00:20:47.568 } 00:20:47.568 }, 00:20:47.568 { 00:20:47.568 "method": "bdev_nvme_set_options", 00:20:47.568 "params": { 00:20:47.568 "action_on_timeout": "none", 00:20:47.569 "allow_accel_sequence": false, 00:20:47.569 "arbitration_burst": 0, 00:20:47.569 "bdev_retry_count": 3, 00:20:47.569 "ctrlr_loss_timeout_sec": 0, 00:20:47.569 "delay_cmd_submit": true, 00:20:47.569 "dhchap_dhgroups": [ 00:20:47.569 "null", 00:20:47.569 "ffdhe2048", 00:20:47.569 "ffdhe3072", 00:20:47.569 "ffdhe4096", 00:20:47.569 "ffdhe6144", 00:20:47.569 "ffdhe8192" 00:20:47.569 ], 00:20:47.569 "dhchap_digests": [ 00:20:47.569 "sha256", 00:20:47.569 "sha384", 00:20:47.569 "sha512" 00:20:47.569 ], 00:20:47.569 "disable_auto_failback": false, 00:20:47.569 "fast_io_fail_timeout_sec": 0, 00:20:47.569 "generate_uuids": false, 00:20:47.569 "high_priority_weight": 0, 00:20:47.569 "io_path_stat": false, 00:20:47.569 "io_queue_requests": 512, 00:20:47.569 "keep_alive_timeout_ms": 10000, 00:20:47.569 "low_priority_weight": 0, 00:20:47.569 "medium_priority_weight": 0, 00:20:47.569 "nvme_adminq_poll_period_us": 10000, 00:20:47.569 "nvme_error_stat": false, 00:20:47.569 "nvme_ioq_poll_period_us": 0, 00:20:47.569 "rdma_cm_event_timeout_ms": 0, 00:20:47.569 "rdma_max_cq_size": 0, 00:20:47.569 "rdma_srq_size": 0, 00:20:47.569 "reconnect_delay_sec": 0, 00:20:47.569 "timeout_admin_us": 0, 00:20:47.569 "timeout_us": 0, 00:20:47.569 "transport_ack_timeout": 0, 00:20:47.569 "transport_retry_count": 4, 00:20:47.569 "transport_tos": 0 00:20:47.569 } 00:20:47.569 }, 00:20:47.569 { 00:20:47.569 "method": "bdev_nvme_attach_controller", 00:20:47.569 "params": { 00:20:47.569 "adrfam": "IPv4", 00:20:47.569 "ctrlr_loss_timeout_sec": 0, 00:20:47.569 "ddgst": false, 00:20:47.569 "fast_io_fail_timeout_sec": 0, 00:20:47.569 "hdgst": false, 00:20:47.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.569 "name": "nvme0", 00:20:47.569 "prchk_guard": false, 00:20:47.569 "prchk_reftag": false, 00:20:47.569 "psk": "key0", 00:20:47.569 "reconnect_delay_sec": 0, 00:20:47.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.569 "traddr": "10.0.0.2", 00:20:47.569 "trsvcid": "4420", 00:20:47.569 "trtype": "TCP" 00:20:47.569 } 00:20:47.569 }, 00:20:47.569 { 00:20:47.569 "method": "bdev_nvme_set_hotplug", 00:20:47.569 "params": { 00:20:47.569 "enable": false, 00:20:47.569 "period_us": 100000 00:20:47.569 } 00:20:47.569 }, 00:20:47.569 { 00:20:47.569 "method": "bdev_enable_histogram", 00:20:47.569 "params": { 00:20:47.569 "enable": true, 00:20:47.569 "name": "nvme0n1" 00:20:47.569 } 00:20:47.569 }, 00:20:47.569 { 00:20:47.569 "method": "bdev_wait_for_examine" 00:20:47.569 } 00:20:47.569 ] 00:20:47.569 }, 00:20:47.569 { 00:20:47.569 "subsystem": "nbd", 00:20:47.569 "config": [] 00:20:47.569 } 00:20:47.569 ] 00:20:47.569 }' 00:20:47.827 [2024-07-14 20:20:36.682841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:20:47.827 [2024-07-14 20:20:36.682998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100898 ] 00:20:47.827 [2024-07-14 20:20:36.826531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.085 [2024-07-14 20:20:36.939938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.085 [2024-07-14 20:20:37.144799] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.653 20:20:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:48.653 20:20:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:48.653 20:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:48.653 20:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:48.912 20:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.912 20:20:37 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:49.172 Running I/O for 1 seconds... 00:20:50.162 00:20:50.162 Latency(us) 00:20:50.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.162 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:50.162 Verification LBA range: start 0x0 length 0x2000 00:20:50.162 nvme0n1 : 1.02 4137.72 16.16 0.00 0.00 30619.61 8698.41 20256.58 00:20:50.162 =================================================================================================================== 00:20:50.162 Total : 4137.72 16.16 0.00 0.00 30619.61 8698.41 20256.58 00:20:50.162 0 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:50.162 nvmf_trace.0 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 100898 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100898 ']' 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100898 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:50.162 20:20:39 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100898 00:20:50.162 killing process with pid 100898 00:20:50.162 Received shutdown signal, test time was about 1.000000 seconds 00:20:50.162 00:20:50.162 Latency(us) 00:20:50.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.162 =================================================================================================================== 00:20:50.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100898' 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100898 00:20:50.162 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100898 00:20:50.421 20:20:39 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:50.421 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:50.421 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.680 rmmod nvme_tcp 00:20:50.680 rmmod nvme_fabrics 00:20:50.680 rmmod nvme_keyring 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 100852 ']' 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 100852 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100852 ']' 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100852 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100852 00:20:50.680 killing process with pid 100852 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100852' 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100852 00:20:50.680 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100852 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.XQbYjhCcfZ /tmp/tmp.auH3gybfmH /tmp/tmp.GktrxbLTTF 00:20:50.940 00:20:50.940 real 1m27.137s 00:20:50.940 user 2m14.163s 00:20:50.940 sys 0m30.225s 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:50.940 20:20:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.940 ************************************ 00:20:50.940 END TEST nvmf_tls 00:20:50.940 ************************************ 00:20:51.200 20:20:40 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:51.200 20:20:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:51.200 20:20:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:51.200 20:20:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.200 ************************************ 00:20:51.200 START TEST nvmf_fips 00:20:51.200 ************************************ 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:51.200 * Looking for test storage... 
00:20:51.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:51.200 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:51.201 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:51.201 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:51.460 Error setting digest 00:20:51.460 00C22310A87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:51.460 00C22310A87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:51.460 Cannot find device "nvmf_tgt_br" 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:51.460 Cannot find device "nvmf_tgt_br2" 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:51.460 Cannot find device "nvmf_tgt_br" 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:51.460 Cannot find device "nvmf_tgt_br2" 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:51.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:51.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:51.460 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:51.461 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:51.461 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:51.461 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:51.461 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:51.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:51.720 00:20:51.720 --- 10.0.0.2 ping statistics --- 00:20:51.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.720 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:51.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:51.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:51.720 00:20:51.720 --- 10.0.0.3 ping statistics --- 00:20:51.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.720 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:51.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:51.720 00:20:51.720 --- 10.0.0.1 ping statistics --- 00:20:51.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.720 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=101191 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 101191 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 101191 ']' 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:51.720 20:20:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:51.979 [2024-07-14 20:20:40.885770] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
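Before this target comes up, fips.sh has already validated the crypto environment: the openssl version must be at least 3.0.0, the provider list must include the fips provider once OPENSSL_CONF points at the generated spdk_fips.conf, and a non-approved digest has to fail. The intent of those checks, reduced to a sketch (the test input fed to md5 is an assumption; the trace only shows it arriving on /dev/fd/62):

    export OPENSSL_CONF=spdk_fips.conf
    openssl list -providers | grep name      # expect a "... fips provider" entry
    if openssl md5 <(echo -n test); then
        # in FIPS mode this branch is never reached: md5 fails with the
        # "digital envelope routines ... unsupported" error logged above
        echo "md5 unexpectedly succeeded - FIPS mode is not active" >&2
        exit 1
    fi
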
00:20:51.979 [2024-07-14 20:20:40.885892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.979 [2024-07-14 20:20:41.026605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.238 [2024-07-14 20:20:41.137414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.238 [2024-07-14 20:20:41.137489] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.238 [2024-07-14 20:20:41.137503] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.238 [2024-07-14 20:20:41.137514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.238 [2024-07-14 20:20:41.137524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.238 [2024-07-14 20:20:41.137563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.806 20:20:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.806 20:20:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:52.806 20:20:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:52.806 20:20:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.806 20:20:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:53.066 20:20:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.325 [2024-07-14 20:20:42.154301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.325 [2024-07-14 20:20:42.170176] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.325 [2024-07-14 20:20:42.170548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.325 [2024-07-14 20:20:42.204606] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:53.325 malloc0 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=101243 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 101243 /var/tmp/bdevperf.sock 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 101243 ']' 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:53.325 20:20:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.325 [2024-07-14 20:20:42.292589] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:53.325 [2024-07-14 20:20:42.292969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101243 ] 00:20:53.584 [2024-07-14 20:20:42.431078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.584 [2024-07-14 20:20:42.541305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.151 20:20:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:54.151 20:20:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:54.151 20:20:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:54.408 [2024-07-14 20:20:43.447691] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.408 [2024-07-14 20:20:43.447832] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:54.666 TLSTESTn1 00:20:54.666 20:20:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:54.666 Running I/O for 10 seconds... 
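The attach step above is the heart of the FIPS case: instead of a keyring name, the interchange-format PSK is written to a mode-0600 file and passed to bdev_nvme_attach_controller by path (the trace flags this path-based PSK as a deprecated feature scheduled for removal in v24.09). Pulled together from the surrounding trace, with the key value and paths exactly as logged:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"
    # start the verify workload; bdevperf itself was launched with -t 10, so the
    # results above cover roughly 10 seconds of TLS I/O against TLSTESTn1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
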
00:21:04.638 00:21:04.638 Latency(us) 00:21:04.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.638 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:04.638 Verification LBA range: start 0x0 length 0x2000 00:21:04.638 TLSTESTn1 : 10.02 4334.85 16.93 0.00 0.00 29472.77 6523.81 31218.97 00:21:04.638 =================================================================================================================== 00:21:04.638 Total : 4334.85 16.93 0.00 0.00 29472.77 6523.81 31218.97 00:21:04.638 0 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:04.638 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:04.639 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:04.639 nvmf_trace.0 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 101243 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 101243 ']' 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 101243 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101243 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:04.898 killing process with pid 101243 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101243' 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 101243 00:21:04.898 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.898 00:21:04.898 Latency(us) 00:21:04.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.898 =================================================================================================================== 00:21:04.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.898 [2024-07-14 20:20:53.805162] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:04.898 20:20:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 101243 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.157 rmmod nvme_tcp 00:21:05.157 rmmod nvme_fabrics 00:21:05.157 rmmod nvme_keyring 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 101191 ']' 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 101191 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 101191 ']' 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 101191 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101191 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:05.157 killing process with pid 101191 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101191' 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 101191 00:21:05.157 [2024-07-14 20:20:54.228840] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:05.157 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 101191 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:05.728 00:21:05.728 real 0m14.529s 00:21:05.728 user 0m19.235s 00:21:05.728 sys 0m6.110s 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:05.728 ************************************ 00:21:05.728 END TEST nvmf_fips 00:21:05.728 ************************************ 00:21:05.728 20:20:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 
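Teardown at the end of the FIPS case mirrors the TLS case: unload the NVMe-oF kernel modules, drop the target network namespace, flush the initiator interface and delete the key file. Condensed from the trace (the ip netns delete line is an assumption about what the _remove_spdk_ns helper does; only the wrapper call is logged):

    modprobe -v -r nvme-tcp       # the rmmod output above shows nvme_tcp going away
    modprobe -v -r nvme-fabrics   # likewise nvme_fabrics and nvme_keyring
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
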
00:21:05.728 20:20:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:21:05.728 20:20:54 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.728 20:20:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:05.728 20:20:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:05.728 20:20:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:05.728 ************************************ 00:21:05.729 START TEST nvmf_fuzz 00:21:05.729 ************************************ 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:05.729 * Looking for test storage... 00:21:05.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.729 20:20:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.730 20:20:54 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:05.730 Cannot find device "nvmf_tgt_br" 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:05.730 Cannot find device "nvmf_tgt_br2" 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:05.730 Cannot find device "nvmf_tgt_br" 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:21:05.730 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:05.989 Cannot find device "nvmf_tgt_br2" 00:21:05.989 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:21:05.989 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:05.989 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:05.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:05.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:05.990 20:20:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:05.990 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:06.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:06.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:21:06.247 00:21:06.247 --- 10.0.0.2 ping statistics --- 00:21:06.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.247 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:06.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:06.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:06.247 00:21:06.247 --- 10.0.0.3 ping statistics --- 00:21:06.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.247 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:06.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:06.247 00:21:06.247 --- 10.0.0.1 ping statistics --- 00:21:06.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.247 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=101585 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 101585 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 101585 ']' 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:06.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
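The nvmf_veth_init steps above build a small two-port topology: the target ends of the veth pairs live inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), their host-side peers plus the initiator's peer are enslaved to the nvmf_br bridge, and iptables opens TCP/4420 on the initiator interface before the three pings verify reachability in both directions. A condensed sketch of the same setup as standalone shell (interface names and addresses exactly as in the log; run as root):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per interface: initiator, target port 1, target port 2
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers so initiator and target share one L2 segment
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (port 4420) in and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator -> target ports
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator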
00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:06.247 20:20:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.182 Malloc0 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.182 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:07.183 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:07.748 Shutting down the fuzz application 00:21:07.749 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:08.008 Shutting down the fuzz application 00:21:08.008 20:20:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:08.008 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.008 20:20:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.008 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.008 rmmod nvme_tcp 00:21:08.008 rmmod nvme_fabrics 00:21:08.268 rmmod nvme_keyring 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 101585 ']' 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 101585 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 101585 ']' 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 101585 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101585 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101585' 00:21:08.268 killing process with pid 101585 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 101585 00:21:08.268 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 101585 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt 
/home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:21:08.528 00:21:08.528 real 0m2.909s 00:21:08.528 user 0m2.991s 00:21:08.528 sys 0m0.748s 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:08.528 20:20:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:08.528 ************************************ 00:21:08.528 END TEST nvmf_fuzz 00:21:08.528 ************************************ 00:21:08.528 20:20:57 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:08.528 20:20:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:08.528 20:20:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:08.528 20:20:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.528 ************************************ 00:21:08.528 START TEST nvmf_multiconnection 00:21:08.528 ************************************ 00:21:08.528 20:20:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:08.788 * Looking for test storage... 00:21:08.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
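The nvmf_fuzz stage that just ended reduces to a short sequence: provision one malloc-backed subsystem over JSON-RPC, then point nvme_fuzz at its TCP listener twice, first time-bounded with a fixed seed and then replaying the JSON-described admin commands. A hedged, condensed restatement (arguments and paths as in the log; spelling the rpc_cmd wrapper out as direct scripts/rpc.py calls is an assumption about that helper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: one TCP transport, one 64 MiB malloc bdev, one subsystem with namespace and listener
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create -b Malloc0 64 512
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: 30 s seeded random fuzz, then a JSON-driven replay
  /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
  /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
      -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a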
00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:21:08.788 20:20:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:08.789 
Cannot find device "nvmf_tgt_br" 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.789 Cannot find device "nvmf_tgt_br2" 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:08.789 Cannot find device "nvmf_tgt_br" 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:08.789 Cannot find device "nvmf_tgt_br2" 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.789 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:09.048 20:20:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:09.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:21:09.048 00:21:09.048 --- 10.0.0.2 ping statistics --- 00:21:09.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.048 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:09.048 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:09.048 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:21:09.048 00:21:09.048 --- 10.0.0.3 ping statistics --- 00:21:09.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.048 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:09.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:09.048 00:21:09.048 --- 10.0.0.1 ping statistics --- 00:21:09.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.048 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:09.048 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:09.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=101797 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 101797 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 101797 ']' 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:09.049 20:20:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:09.049 [2024-07-14 20:20:58.128310] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:21:09.049 [2024-07-14 20:20:58.128446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.307 [2024-07-14 20:20:58.273533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:09.307 [2024-07-14 20:20:58.377593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
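At this point the multiconnection test launches nvmf_tgt inside the namespace with a four-core mask (-m 0xF) and waits for its JSON-RPC socket before provisioning eleven identical malloc-backed subsystems, one per loop iteration, as the rpc_cmd calls below the startup notices show. A hedged sketch of that launch-and-provision cycle (waitforlisten is approximated here by polling rpc_get_methods; bdev, subsystem, and listener arguments as in the log):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # crude stand-in for waitforlisten: poll the RPC socket until the app answers
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # one TCP transport, then eleven subsystems all listening on 10.0.0.2:4420
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done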
00:21:09.307 [2024-07-14 20:20:58.377980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.307 [2024-07-14 20:20:58.378000] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.307 [2024-07-14 20:20:58.378009] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.307 [2024-07-14 20:20:58.378017] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.307 [2024-07-14 20:20:58.378110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.307 [2024-07-14 20:20:58.378303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.307 [2024-07-14 20:20:58.378441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.307 [2024-07-14 20:20:58.378834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.242 [2024-07-14 20:20:59.196969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.242 Malloc1 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.242 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.243 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.243 [2024-07-14 20:20:59.285521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.243 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.243 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.243 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:10.243 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.243 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.243 Malloc2 00:21:10.501 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 Malloc3 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 Malloc4 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:10.502 20:20:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 Malloc5 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.502 Malloc6 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.502 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 Malloc7 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 Malloc8 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.762 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.763 Malloc9 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.763 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 Malloc10 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 Malloc11 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:11.022 20:20:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:11.281 20:21:00 nvmf_tcp.nvmf_multiconnection -- 
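A note for readers following the trace: the block above is the target-side setup loop from target/multiconnection.sh (lines 21-25), repeated once per subsystem index. Stripped of the harness's rpc_cmd/xtrace plumbing, it reduces to the sketch below. This is a hedged reconstruction, not the script itself: it assumes an SPDK nvmf target is already running with the TCP transport created, that scripts/rpc.py is the RPC client in use, and it reuses the listener address (10.0.0.2:4420) and NVMF_SUBSYS=11 seen in this run.

#!/usr/bin/env bash
# Sketch of the per-subsystem setup seen in the trace (assumed environment, see note above).
rpc=./scripts/rpc.py        # assumed standard SPDK RPC client
NVMF_SUBSYS=11              # this run creates cnode1 .. cnode11

for i in $(seq 1 "$NVMF_SUBSYS"); do
  # 64 MB malloc bdev with a 512-byte block size (bdev_malloc_create 64 512 -b MallocN)
  "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
  # subsystem with serial SPDK$i; -a allows any host NQN to connect
  "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  # expose the bdev as a namespace and add the TCP listener used by this run
  "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done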
target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:11.281 20:21:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:11.281 20:21:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:11.281 20:21:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:11.281 20:21:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.181 20:21:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:13.439 20:21:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:13.439 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:13.439 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.439 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:13.439 20:21:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:15.355 20:21:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:15.614 20:21:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:15.614 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:15.614 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:15.614 20:21:04 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:15.614 20:21:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:17.512 20:21:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:17.770 20:21:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:17.770 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:17.770 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:17.770 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:17.770 20:21:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:19.685 20:21:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:19.943 20:21:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:19.943 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:19.943 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:19.943 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:19.943 20:21:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:21.844 20:21:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:21.845 20:21:10 
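The connect phase that continues below repeats one pattern per subsystem: nvme connect against the next cnode, then the waitforserial helper (autotest_common.sh@1194-1204 in the trace) polls lsblk until a block device carrying the expected SPDKn serial appears. Condensed into a standalone sketch, with the host NQN/UUID, the lsblk/grep check, and the roughly 16-try, 2-second poll taken from this run (the real helper sleeps before each check; this is a simplified equivalent, not the harness code):

#!/usr/bin/env bash
# Sketch of the connect-and-wait pattern from the trace; hostid and addresses are the ones used here.
HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4
for i in $(seq 1 11); do
  nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
    -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420

  # waitforserial: poll lsblk (as in the trace) until the serial shows up, giving up after ~16 tries
  tries=0
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
    tries=$((tries + 1))
    if [ "$tries" -gt 15 ]; then
      echo "device with serial SPDK$i never appeared" >&2
      exit 1
    fi
    sleep 2
  done
done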
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:21.845 20:21:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:21:21.845 20:21:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:21.845 20:21:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:21.845 20:21:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:21.845 20:21:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.845 20:21:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:22.103 20:21:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:22.103 20:21:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:22.103 20:21:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:22.103 20:21:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:22.103 20:21:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:24.634 20:21:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:26.535 
20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:26.535 20:21:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:28.439 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:28.439 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:28.439 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:28.698 20:21:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:31.231 20:21:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:33.134 20:21:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:33.134 20:21:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:33.134 20:21:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:33.134 20:21:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:33.134 20:21:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:33.134 20:21:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:35.130 20:21:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:35.130 20:21:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:35.130 20:21:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:21:35.130 20:21:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:35.130 20:21:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.130 20:21:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:35.130 20:21:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:35.130 [global] 00:21:35.130 thread=1 00:21:35.130 invalidate=1 00:21:35.130 rw=read 00:21:35.130 time_based=1 00:21:35.130 runtime=10 00:21:35.130 ioengine=libaio 00:21:35.130 direct=1 00:21:35.130 bs=262144 00:21:35.130 iodepth=64 
00:21:35.130 norandommap=1 00:21:35.130 numjobs=1 00:21:35.130 00:21:35.130 [job0] 00:21:35.130 filename=/dev/nvme0n1 00:21:35.130 [job1] 00:21:35.130 filename=/dev/nvme10n1 00:21:35.130 [job2] 00:21:35.130 filename=/dev/nvme1n1 00:21:35.130 [job3] 00:21:35.130 filename=/dev/nvme2n1 00:21:35.130 [job4] 00:21:35.130 filename=/dev/nvme3n1 00:21:35.130 [job5] 00:21:35.130 filename=/dev/nvme4n1 00:21:35.130 [job6] 00:21:35.130 filename=/dev/nvme5n1 00:21:35.130 [job7] 00:21:35.130 filename=/dev/nvme6n1 00:21:35.130 [job8] 00:21:35.130 filename=/dev/nvme7n1 00:21:35.130 [job9] 00:21:35.130 filename=/dev/nvme8n1 00:21:35.130 [job10] 00:21:35.130 filename=/dev/nvme9n1 00:21:35.388 Could not set queue depth (nvme0n1) 00:21:35.388 Could not set queue depth (nvme10n1) 00:21:35.388 Could not set queue depth (nvme1n1) 00:21:35.388 Could not set queue depth (nvme2n1) 00:21:35.388 Could not set queue depth (nvme3n1) 00:21:35.388 Could not set queue depth (nvme4n1) 00:21:35.388 Could not set queue depth (nvme5n1) 00:21:35.388 Could not set queue depth (nvme6n1) 00:21:35.388 Could not set queue depth (nvme7n1) 00:21:35.388 Could not set queue depth (nvme8n1) 00:21:35.388 Could not set queue depth (nvme9n1) 00:21:35.388 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:35.388 fio-3.35 00:21:35.388 Starting 11 threads 00:21:47.590 00:21:47.590 job0: (groupid=0, jobs=1): err= 0: pid=102266: Sun Jul 14 20:21:34 2024 00:21:47.590 read: IOPS=462, BW=116MiB/s (121MB/s)(1171MiB/10119msec) 00:21:47.590 slat (usec): min=22, max=121618, avg=2105.94, stdev=6760.05 00:21:47.590 clat (msec): min=10, max=274, avg=135.92, stdev=19.85 00:21:47.590 lat (msec): min=10, max=274, avg=138.03, stdev=20.62 00:21:47.590 clat percentiles (msec): 00:21:47.590 | 1.00th=[ 101], 5.00th=[ 112], 10.00th=[ 118], 20.00th=[ 125], 00:21:47.590 | 30.00th=[ 129], 40.00th=[ 132], 50.00th=[ 136], 60.00th=[ 140], 00:21:47.590 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 157], 95.00th=[ 163], 00:21:47.590 | 99.00th=[ 184], 99.50th=[ 224], 99.90th=[ 275], 99.95th=[ 275], 00:21:47.590 | 99.99th=[ 275] 00:21:47.590 bw ( KiB/s): min=97280, max=130048, per=8.05%, avg=118212.35, stdev=8901.73, samples=20 00:21:47.590 iops : min= 380, max= 508, avg=461.75, stdev=34.78, samples=20 
00:21:47.590 lat (msec) : 20=0.30%, 50=0.34%, 100=0.34%, 250=98.57%, 500=0.45% 00:21:47.590 cpu : usr=0.16%, sys=1.70%, ctx=969, majf=0, minf=4097 00:21:47.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.590 issued rwts: total=4682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.590 job1: (groupid=0, jobs=1): err= 0: pid=102267: Sun Jul 14 20:21:34 2024 00:21:47.590 read: IOPS=644, BW=161MiB/s (169MB/s)(1622MiB/10073msec) 00:21:47.590 slat (usec): min=23, max=90321, avg=1507.41, stdev=5318.59 00:21:47.590 clat (msec): min=21, max=276, avg=97.66, stdev=32.78 00:21:47.590 lat (msec): min=23, max=276, avg=99.16, stdev=33.48 00:21:47.590 clat percentiles (msec): 00:21:47.590 | 1.00th=[ 62], 5.00th=[ 70], 10.00th=[ 74], 20.00th=[ 80], 00:21:47.590 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 92], 00:21:47.590 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 155], 95.00th=[ 186], 00:21:47.590 | 99.00th=[ 211], 99.50th=[ 215], 99.90th=[ 236], 99.95th=[ 236], 00:21:47.590 | 99.99th=[ 275] 00:21:47.590 bw ( KiB/s): min=72336, max=201728, per=11.20%, avg=164433.60, stdev=41455.87, samples=20 00:21:47.590 iops : min= 282, max= 788, avg=642.20, stdev=161.97, samples=20 00:21:47.590 lat (msec) : 50=0.28%, 100=78.93%, 250=20.78%, 500=0.02% 00:21:47.590 cpu : usr=0.36%, sys=2.56%, ctx=1638, majf=0, minf=4097 00:21:47.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.590 issued rwts: total=6488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.590 job2: (groupid=0, jobs=1): err= 0: pid=102268: Sun Jul 14 20:21:34 2024 00:21:47.590 read: IOPS=464, BW=116MiB/s (122MB/s)(1172MiB/10087msec) 00:21:47.590 slat (usec): min=22, max=98994, avg=2131.53, stdev=7175.86 00:21:47.590 clat (msec): min=68, max=236, avg=135.34, stdev=23.68 00:21:47.590 lat (msec): min=85, max=281, avg=137.47, stdev=24.65 00:21:47.590 clat percentiles (msec): 00:21:47.590 | 1.00th=[ 94], 5.00th=[ 107], 10.00th=[ 112], 20.00th=[ 117], 00:21:47.590 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 136], 00:21:47.590 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 184], 00:21:47.590 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 236], 99.95th=[ 236], 00:21:47.590 | 99.99th=[ 239] 00:21:47.590 bw ( KiB/s): min=74752, max=139264, per=8.06%, avg=118326.55, stdev=18575.70, samples=20 00:21:47.590 iops : min= 292, max= 544, avg=462.15, stdev=72.58, samples=20 00:21:47.590 lat (msec) : 100=1.94%, 250=98.06% 00:21:47.590 cpu : usr=0.17%, sys=1.82%, ctx=1035, majf=0, minf=4097 00:21:47.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.590 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.590 job3: (groupid=0, jobs=1): err= 0: pid=102269: Sun Jul 14 20:21:34 2024 00:21:47.590 read: 
IOPS=446, BW=112MiB/s (117MB/s)(1128MiB/10093msec) 00:21:47.590 slat (usec): min=22, max=110042, avg=2167.57, stdev=7646.55 00:21:47.590 clat (msec): min=60, max=257, avg=140.85, stdev=24.89 00:21:47.590 lat (msec): min=61, max=317, avg=143.02, stdev=26.02 00:21:47.590 clat percentiles (msec): 00:21:47.590 | 1.00th=[ 83], 5.00th=[ 111], 10.00th=[ 117], 20.00th=[ 124], 00:21:47.590 | 30.00th=[ 127], 40.00th=[ 132], 50.00th=[ 136], 60.00th=[ 142], 00:21:47.590 | 70.00th=[ 148], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 194], 00:21:47.590 | 99.00th=[ 211], 99.50th=[ 224], 99.90th=[ 257], 99.95th=[ 257], 00:21:47.590 | 99.99th=[ 257] 00:21:47.590 bw ( KiB/s): min=79360, max=130560, per=7.76%, avg=113837.35, stdev=14789.21, samples=20 00:21:47.590 iops : min= 310, max= 510, avg=444.60, stdev=57.77, samples=20 00:21:47.590 lat (msec) : 100=1.68%, 250=97.98%, 500=0.33% 00:21:47.590 cpu : usr=0.20%, sys=1.65%, ctx=840, majf=0, minf=4097 00:21:47.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:47.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.590 issued rwts: total=4511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.590 job4: (groupid=0, jobs=1): err= 0: pid=102270: Sun Jul 14 20:21:34 2024 00:21:47.590 read: IOPS=476, BW=119MiB/s (125MB/s)(1205MiB/10111msec) 00:21:47.590 slat (usec): min=17, max=80606, avg=2019.93, stdev=6897.39 00:21:47.590 clat (msec): min=88, max=279, avg=132.04, stdev=18.18 00:21:47.590 lat (msec): min=88, max=303, avg=134.06, stdev=19.12 00:21:47.590 clat percentiles (msec): 00:21:47.590 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 120], 00:21:47.591 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 134], 00:21:47.591 | 70.00th=[ 136], 80.00th=[ 142], 90.00th=[ 150], 95.00th=[ 163], 00:21:47.591 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 279], 99.95th=[ 279], 00:21:47.591 | 99.99th=[ 279] 00:21:47.591 bw ( KiB/s): min=94720, max=135680, per=8.29%, avg=121716.80, stdev=10875.92, samples=20 00:21:47.591 iops : min= 370, max= 530, avg=475.40, stdev=42.49, samples=20 00:21:47.591 lat (msec) : 100=0.81%, 250=98.92%, 500=0.27% 00:21:47.591 cpu : usr=0.22%, sys=1.62%, ctx=959, majf=0, minf=4097 00:21:47.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.591 issued rwts: total=4819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.591 job5: (groupid=0, jobs=1): err= 0: pid=102272: Sun Jul 14 20:21:34 2024 00:21:47.591 read: IOPS=475, BW=119MiB/s (125MB/s)(1200MiB/10097msec) 00:21:47.591 slat (usec): min=21, max=69538, avg=2083.98, stdev=6889.99 00:21:47.591 clat (msec): min=33, max=265, avg=132.35, stdev=19.57 00:21:47.591 lat (msec): min=33, max=265, avg=134.43, stdev=20.53 00:21:47.591 clat percentiles (msec): 00:21:47.591 | 1.00th=[ 55], 5.00th=[ 108], 10.00th=[ 114], 20.00th=[ 122], 00:21:47.591 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 132], 60.00th=[ 136], 00:21:47.591 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 155], 95.00th=[ 163], 00:21:47.591 | 99.00th=[ 182], 99.50th=[ 207], 99.90th=[ 236], 99.95th=[ 236], 00:21:47.591 | 99.99th=[ 266] 00:21:47.591 bw ( 
KiB/s): min=91648, max=141312, per=8.26%, avg=121242.15, stdev=11063.59, samples=20 00:21:47.591 iops : min= 358, max= 552, avg=473.55, stdev=43.18, samples=20 00:21:47.591 lat (msec) : 50=0.98%, 100=1.02%, 250=97.96%, 500=0.04% 00:21:47.591 cpu : usr=0.18%, sys=1.80%, ctx=949, majf=0, minf=4097 00:21:47.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.591 issued rwts: total=4801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.591 job6: (groupid=0, jobs=1): err= 0: pid=102273: Sun Jul 14 20:21:34 2024 00:21:47.591 read: IOPS=490, BW=123MiB/s (129MB/s)(1234MiB/10063msec) 00:21:47.591 slat (usec): min=26, max=63295, avg=2019.64, stdev=6410.99 00:21:47.591 clat (msec): min=28, max=187, avg=128.15, stdev=24.57 00:21:47.591 lat (msec): min=28, max=223, avg=130.17, stdev=25.39 00:21:47.591 clat percentiles (msec): 00:21:47.591 | 1.00th=[ 59], 5.00th=[ 74], 10.00th=[ 85], 20.00th=[ 117], 00:21:47.591 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 138], 00:21:47.591 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 157], 00:21:47.591 | 99.00th=[ 174], 99.50th=[ 184], 99.90th=[ 188], 99.95th=[ 188], 00:21:47.591 | 99.99th=[ 188] 00:21:47.591 bw ( KiB/s): min=105984, max=198144, per=8.50%, avg=124687.55, stdev=22430.59, samples=20 00:21:47.591 iops : min= 414, max= 774, avg=487.05, stdev=87.62, samples=20 00:21:47.591 lat (msec) : 50=0.41%, 100=12.81%, 250=86.79% 00:21:47.591 cpu : usr=0.21%, sys=2.19%, ctx=1236, majf=0, minf=4097 00:21:47.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.591 issued rwts: total=4935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.591 job7: (groupid=0, jobs=1): err= 0: pid=102277: Sun Jul 14 20:21:34 2024 00:21:47.591 read: IOPS=471, BW=118MiB/s (124MB/s)(1193MiB/10114msec) 00:21:47.591 slat (usec): min=21, max=69754, avg=2091.35, stdev=6894.85 00:21:47.591 clat (msec): min=16, max=285, avg=133.34, stdev=20.91 00:21:47.591 lat (msec): min=16, max=285, avg=135.43, stdev=21.91 00:21:47.591 clat percentiles (msec): 00:21:47.591 | 1.00th=[ 57], 5.00th=[ 109], 10.00th=[ 116], 20.00th=[ 123], 00:21:47.591 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 138], 00:21:47.591 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 155], 95.00th=[ 163], 00:21:47.591 | 99.00th=[ 182], 99.50th=[ 224], 99.90th=[ 288], 99.95th=[ 288], 00:21:47.591 | 99.99th=[ 288] 00:21:47.591 bw ( KiB/s): min=91648, max=138752, per=8.21%, avg=120438.55, stdev=9574.82, samples=20 00:21:47.591 iops : min= 358, max= 542, avg=470.45, stdev=37.40, samples=20 00:21:47.591 lat (msec) : 20=0.08%, 50=0.71%, 100=1.78%, 250=97.00%, 500=0.42% 00:21:47.591 cpu : usr=0.20%, sys=1.83%, ctx=1040, majf=0, minf=4097 00:21:47.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.591 issued rwts: total=4770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:21:47.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.591 job8: (groupid=0, jobs=1): err= 0: pid=102278: Sun Jul 14 20:21:34 2024 00:21:47.591 read: IOPS=442, BW=111MiB/s (116MB/s)(1119MiB/10100msec) 00:21:47.591 slat (usec): min=21, max=113803, avg=2231.31, stdev=7498.40 00:21:47.591 clat (msec): min=29, max=277, avg=141.88, stdev=23.34 00:21:47.591 lat (msec): min=30, max=287, avg=144.11, stdev=24.42 00:21:47.591 clat percentiles (msec): 00:21:47.591 | 1.00th=[ 100], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 124], 00:21:47.591 | 30.00th=[ 128], 40.00th=[ 134], 50.00th=[ 140], 60.00th=[ 144], 00:21:47.591 | 70.00th=[ 148], 80.00th=[ 159], 90.00th=[ 180], 95.00th=[ 188], 00:21:47.591 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 220], 99.95th=[ 222], 00:21:47.591 | 99.99th=[ 279] 00:21:47.591 bw ( KiB/s): min=79006, max=135168, per=7.69%, avg=112893.90, stdev=16156.23, samples=20 00:21:47.591 iops : min= 308, max= 528, avg=440.95, stdev=63.18, samples=20 00:21:47.591 lat (msec) : 50=0.09%, 100=0.96%, 250=98.93%, 500=0.02% 00:21:47.591 cpu : usr=0.19%, sys=1.72%, ctx=969, majf=0, minf=4097 00:21:47.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.591 issued rwts: total=4474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.591 job9: (groupid=0, jobs=1): err= 0: pid=102279: Sun Jul 14 20:21:34 2024 00:21:47.591 read: IOPS=479, BW=120MiB/s (126MB/s)(1206MiB/10055msec) 00:21:47.591 slat (usec): min=22, max=76859, avg=2067.24, stdev=6705.47 00:21:47.591 clat (msec): min=54, max=196, avg=131.19, stdev=23.40 00:21:47.591 lat (msec): min=54, max=217, avg=133.25, stdev=24.40 00:21:47.591 clat percentiles (msec): 00:21:47.591 | 1.00th=[ 65], 5.00th=[ 80], 10.00th=[ 89], 20.00th=[ 121], 00:21:47.591 | 30.00th=[ 127], 40.00th=[ 132], 50.00th=[ 136], 60.00th=[ 140], 00:21:47.591 | 70.00th=[ 144], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 159], 00:21:47.591 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:21:47.591 | 99.99th=[ 197] 00:21:47.591 bw ( KiB/s): min=99328, max=198028, per=8.30%, avg=121779.95, stdev=21872.04, samples=20 00:21:47.591 iops : min= 388, max= 773, avg=475.65, stdev=85.35, samples=20 00:21:47.591 lat (msec) : 100=12.44%, 250=87.56% 00:21:47.591 cpu : usr=0.32%, sys=1.58%, ctx=998, majf=0, minf=4097 00:21:47.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.591 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.591 job10: (groupid=0, jobs=1): err= 0: pid=102280: Sun Jul 14 20:21:34 2024 00:21:47.591 read: IOPS=896, BW=224MiB/s (235MB/s)(2256MiB/10066msec) 00:21:47.591 slat (usec): min=19, max=49138, avg=1100.05, stdev=4044.82 00:21:47.591 clat (msec): min=6, max=127, avg=70.15, stdev=22.15 00:21:47.591 lat (msec): min=6, max=135, avg=71.25, stdev=22.60 00:21:47.591 clat percentiles (msec): 00:21:47.591 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 45], 00:21:47.591 | 30.00th=[ 55], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 82], 00:21:47.591 | 70.00th=[ 86], 80.00th=[ 
90], 90.00th=[ 95], 95.00th=[ 100], 00:21:47.591 | 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 117], 99.95th=[ 124], 00:21:47.591 | 99.99th=[ 128] 00:21:47.591 bw ( KiB/s): min=179200, max=387584, per=15.63%, avg=229333.60, stdev=74579.44, samples=20 00:21:47.591 iops : min= 700, max= 1514, avg=895.70, stdev=291.27, samples=20 00:21:47.591 lat (msec) : 10=0.10%, 20=0.80%, 50=25.10%, 100=70.21%, 250=3.79% 00:21:47.591 cpu : usr=0.31%, sys=3.04%, ctx=1657, majf=0, minf=4097 00:21:47.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:47.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:47.591 issued rwts: total=9024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:47.591 00:21:47.591 Run status group 0 (all jobs): 00:21:47.591 READ: bw=1433MiB/s (1503MB/s), 111MiB/s-224MiB/s (116MB/s-235MB/s), io=14.2GiB (15.2GB), run=10055-10119msec 00:21:47.591 00:21:47.591 Disk stats (read/write): 00:21:47.591 nvme0n1: ios=9280/0, merge=0/0, ticks=1241062/0, in_queue=1241062, util=97.48% 00:21:47.591 nvme10n1: ios=12884/0, merge=0/0, ticks=1240878/0, in_queue=1240878, util=97.54% 00:21:47.591 nvme1n1: ios=9254/0, merge=0/0, ticks=1241334/0, in_queue=1241334, util=97.88% 00:21:47.591 nvme2n1: ios=8916/0, merge=0/0, ticks=1241296/0, in_queue=1241296, util=97.88% 00:21:47.591 nvme3n1: ios=9528/0, merge=0/0, ticks=1236617/0, in_queue=1236617, util=97.74% 00:21:47.591 nvme4n1: ios=9475/0, merge=0/0, ticks=1235529/0, in_queue=1235529, util=97.99% 00:21:47.591 nvme5n1: ios=9801/0, merge=0/0, ticks=1243020/0, in_queue=1243020, util=98.34% 00:21:47.591 nvme6n1: ios=9442/0, merge=0/0, ticks=1239205/0, in_queue=1239205, util=98.51% 00:21:47.591 nvme7n1: ios=8857/0, merge=0/0, ticks=1243206/0, in_queue=1243206, util=98.63% 00:21:47.591 nvme8n1: ios=9523/0, merge=0/0, ticks=1243480/0, in_queue=1243480, util=98.30% 00:21:47.591 nvme9n1: ios=17920/0, merge=0/0, ticks=1235810/0, in_queue=1235810, util=98.65% 00:21:47.591 20:21:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:47.591 [global] 00:21:47.591 thread=1 00:21:47.591 invalidate=1 00:21:47.591 rw=randwrite 00:21:47.591 time_based=1 00:21:47.591 runtime=10 00:21:47.591 ioengine=libaio 00:21:47.591 direct=1 00:21:47.591 bs=262144 00:21:47.591 iodepth=64 00:21:47.591 norandommap=1 00:21:47.591 numjobs=1 00:21:47.591 00:21:47.591 [job0] 00:21:47.592 filename=/dev/nvme0n1 00:21:47.592 [job1] 00:21:47.592 filename=/dev/nvme10n1 00:21:47.592 [job2] 00:21:47.592 filename=/dev/nvme1n1 00:21:47.592 [job3] 00:21:47.592 filename=/dev/nvme2n1 00:21:47.592 [job4] 00:21:47.592 filename=/dev/nvme3n1 00:21:47.592 [job5] 00:21:47.592 filename=/dev/nvme4n1 00:21:47.592 [job6] 00:21:47.592 filename=/dev/nvme5n1 00:21:47.592 [job7] 00:21:47.592 filename=/dev/nvme6n1 00:21:47.592 [job8] 00:21:47.592 filename=/dev/nvme7n1 00:21:47.592 [job9] 00:21:47.592 filename=/dev/nvme8n1 00:21:47.592 [job10] 00:21:47.592 filename=/dev/nvme9n1 00:21:47.592 Could not set queue depth (nvme0n1) 00:21:47.592 Could not set queue depth (nvme10n1) 00:21:47.592 Could not set queue depth (nvme1n1) 00:21:47.592 Could not set queue depth (nvme2n1) 00:21:47.592 Could not set queue depth (nvme3n1) 00:21:47.592 Could not set queue depth (nvme4n1) 00:21:47.592 Could not set queue depth 
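Both fio passes in this test, the read pass whose per-job results appear above and the randwrite pass launched just before this point, go through scripts/fio-wrapper (-p nvmf -i 262144 -d 64 -t read|randwrite -r 10), which ends up running the job file echoed in the log: a shared [global] section plus one [jobN] stanza per connected namespace. As a rough, hand-written equivalent of the read pass, assuming the same eleven /dev/nvmeXn1 names this run happened to enumerate (the wrapper builds its own device list, so the names may differ on another host):

#!/usr/bin/env bash
# Hand-rolled approximation of the job file shown in the log for the read pass.
# The device order mirrors job0..job10 above; on another host the names would differ.
cat > multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF

j=0
for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
           /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1; do
  printf '[job%d]\nfilename=%s\n' "$j" "$dev" >> multiconnection.fio
  j=$((j + 1))
done

fio multiconnection.fio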
(nvme5n1) 00:21:47.592 Could not set queue depth (nvme6n1) 00:21:47.592 Could not set queue depth (nvme7n1) 00:21:47.592 Could not set queue depth (nvme8n1) 00:21:47.592 Could not set queue depth (nvme9n1) 00:21:47.592 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:47.592 fio-3.35 00:21:47.592 Starting 11 threads 00:21:57.556 00:21:57.556 job0: (groupid=0, jobs=1): err= 0: pid=102480: Sun Jul 14 20:21:45 2024 00:21:57.556 write: IOPS=245, BW=61.5MiB/s (64.5MB/s)(628MiB/10221msec); 0 zone resets 00:21:57.556 slat (usec): min=29, max=73766, avg=3976.03, stdev=7673.98 00:21:57.556 clat (msec): min=11, max=493, avg=256.03, stdev=38.00 00:21:57.556 lat (msec): min=11, max=493, avg=260.01, stdev=37.63 00:21:57.556 clat percentiles (msec): 00:21:57.556 | 1.00th=[ 116], 5.00th=[ 192], 10.00th=[ 213], 20.00th=[ 239], 00:21:57.556 | 30.00th=[ 251], 40.00th=[ 259], 50.00th=[ 264], 60.00th=[ 268], 00:21:57.556 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 284], 00:21:57.556 | 99.00th=[ 388], 99.50th=[ 443], 99.90th=[ 477], 99.95th=[ 493], 00:21:57.556 | 99.99th=[ 493] 00:21:57.556 bw ( KiB/s): min=59392, max=77312, per=5.67%, avg=62720.00, stdev=4418.46, samples=20 00:21:57.556 iops : min= 232, max= 302, avg=245.00, stdev=17.26, samples=20 00:21:57.556 lat (msec) : 20=0.12%, 100=0.80%, 250=27.70%, 500=71.39% 00:21:57.556 cpu : usr=1.00%, sys=0.77%, ctx=3043, majf=0, minf=1 00:21:57.556 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:57.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.556 issued rwts: total=0,2513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.556 job1: (groupid=0, jobs=1): err= 0: pid=102481: Sun Jul 14 20:21:45 2024 00:21:57.556 write: IOPS=256, BW=64.0MiB/s (67.1MB/s)(654MiB/10219msec); 0 zone resets 00:21:57.556 slat (usec): min=29, max=54194, avg=3820.98, stdev=7075.30 00:21:57.556 clat (msec): min=15, max=469, avg=245.87, stdev=35.47 00:21:57.556 lat (msec): min=18, max=469, avg=249.69, stdev=35.20 00:21:57.556 clat 
percentiles (msec): 00:21:57.556 | 1.00th=[ 87], 5.00th=[ 201], 10.00th=[ 211], 20.00th=[ 228], 00:21:57.556 | 30.00th=[ 236], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 257], 00:21:57.556 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 279], 00:21:57.556 | 99.00th=[ 351], 99.50th=[ 418], 99.90th=[ 451], 99.95th=[ 468], 00:21:57.556 | 99.99th=[ 468] 00:21:57.556 bw ( KiB/s): min=59392, max=74240, per=5.91%, avg=65350.25, stdev=4332.43, samples=20 00:21:57.556 iops : min= 232, max= 290, avg=255.25, stdev=16.92, samples=20 00:21:57.556 lat (msec) : 20=0.04%, 50=0.46%, 100=0.61%, 250=47.12%, 500=51.78% 00:21:57.556 cpu : usr=1.10%, sys=0.82%, ctx=3145, majf=0, minf=1 00:21:57.556 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:57.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.556 issued rwts: total=0,2617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.556 job2: (groupid=0, jobs=1): err= 0: pid=102493: Sun Jul 14 20:21:45 2024 00:21:57.556 write: IOPS=463, BW=116MiB/s (122MB/s)(1173MiB/10114msec); 0 zone resets 00:21:57.556 slat (usec): min=27, max=29171, avg=2095.38, stdev=3628.98 00:21:57.556 clat (msec): min=5, max=251, avg=135.86, stdev=18.56 00:21:57.556 lat (msec): min=5, max=251, avg=137.96, stdev=18.49 00:21:57.556 clat percentiles (msec): 00:21:57.556 | 1.00th=[ 39], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 131], 00:21:57.556 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 138], 00:21:57.556 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 155], 00:21:57.556 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 243], 99.95th=[ 243], 00:21:57.556 | 99.99th=[ 251] 00:21:57.556 bw ( KiB/s): min=109056, max=122880, per=10.71%, avg=118439.20, stdev=3696.33, samples=20 00:21:57.556 iops : min= 426, max= 480, avg=462.65, stdev=14.44, samples=20 00:21:57.556 lat (msec) : 10=0.13%, 20=0.28%, 50=1.04%, 100=0.53%, 250=97.97% 00:21:57.556 lat (msec) : 500=0.04% 00:21:57.556 cpu : usr=1.71%, sys=1.46%, ctx=5583, majf=0, minf=1 00:21:57.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:57.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.556 issued rwts: total=0,4690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.556 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.556 job3: (groupid=0, jobs=1): err= 0: pid=102494: Sun Jul 14 20:21:45 2024 00:21:57.556 write: IOPS=254, BW=63.7MiB/s (66.8MB/s)(651MiB/10226msec); 0 zone resets 00:21:57.556 slat (usec): min=24, max=53939, avg=3776.05, stdev=7318.22 00:21:57.556 clat (msec): min=8, max=511, avg=247.36, stdev=46.78 00:21:57.556 lat (msec): min=8, max=511, avg=251.14, stdev=47.00 00:21:57.556 clat percentiles (msec): 00:21:57.556 | 1.00th=[ 64], 5.00th=[ 165], 10.00th=[ 188], 20.00th=[ 226], 00:21:57.556 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 264], 00:21:57.556 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 292], 00:21:57.556 | 99.00th=[ 355], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 510], 00:21:57.556 | 99.99th=[ 510] 00:21:57.556 bw ( KiB/s): min=55296, max=91648, per=5.88%, avg=65024.00, stdev=9090.89, samples=20 00:21:57.556 iops : min= 216, max= 358, avg=254.00, stdev=35.51, samples=20 00:21:57.556 lat 
(msec) : 10=0.08%, 20=0.19%, 50=0.46%, 100=1.23%, 250=36.29% 00:21:57.556 lat (msec) : 500=61.67%, 750=0.08% 00:21:57.556 cpu : usr=0.69%, sys=0.72%, ctx=2606, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.557 issued rwts: total=0,2604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.557 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.557 job4: (groupid=0, jobs=1): err= 0: pid=102495: Sun Jul 14 20:21:45 2024 00:21:57.557 write: IOPS=457, BW=114MiB/s (120MB/s)(1159MiB/10121msec); 0 zone resets 00:21:57.557 slat (usec): min=29, max=49728, avg=2151.53, stdev=3721.95 00:21:57.557 clat (msec): min=9, max=260, avg=137.54, stdev=16.18 00:21:57.557 lat (msec): min=9, max=260, avg=139.69, stdev=15.99 00:21:57.557 clat percentiles (msec): 00:21:57.557 | 1.00th=[ 120], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 131], 00:21:57.557 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 138], 00:21:57.557 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 155], 00:21:57.557 | 99.00th=[ 207], 99.50th=[ 211], 99.90th=[ 251], 99.95th=[ 253], 00:21:57.557 | 99.99th=[ 262] 00:21:57.557 bw ( KiB/s): min=88753, max=122880, per=10.58%, avg=117026.45, stdev=7534.60, samples=20 00:21:57.557 iops : min= 346, max= 480, avg=457.10, stdev=29.57, samples=20 00:21:57.557 lat (msec) : 10=0.02%, 20=0.09%, 50=0.26%, 100=0.43%, 250=99.07% 00:21:57.557 lat (msec) : 500=0.13% 00:21:57.557 cpu : usr=1.81%, sys=1.51%, ctx=5545, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.557 issued rwts: total=0,4634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.557 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.557 job5: (groupid=0, jobs=1): err= 0: pid=102496: Sun Jul 14 20:21:45 2024 00:21:57.557 write: IOPS=699, BW=175MiB/s (183MB/s)(1763MiB/10085msec); 0 zone resets 00:21:57.557 slat (usec): min=20, max=7563, avg=1413.66, stdev=2379.29 00:21:57.557 clat (msec): min=4, max=177, avg=90.06, stdev= 8.24 00:21:57.557 lat (msec): min=4, max=177, avg=91.48, stdev= 8.05 00:21:57.557 clat percentiles (msec): 00:21:57.557 | 1.00th=[ 73], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:21:57.557 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 90], 60.00th=[ 91], 00:21:57.557 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 97], 00:21:57.557 | 99.00th=[ 106], 99.50th=[ 126], 99.90th=[ 167], 99.95th=[ 171], 00:21:57.557 | 99.99th=[ 178] 00:21:57.557 bw ( KiB/s): min=164864, max=184832, per=16.18%, avg=178944.00, stdev=4984.83, samples=20 00:21:57.557 iops : min= 644, max= 722, avg=699.00, stdev=19.47, samples=20 00:21:57.557 lat (msec) : 10=0.13%, 20=0.11%, 50=0.34%, 100=96.98%, 250=2.44% 00:21:57.557 cpu : usr=1.81%, sys=1.51%, ctx=8643, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.557 issued rwts: total=0,7053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.557 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:21:57.557 job6: (groupid=0, jobs=1): err= 0: pid=102497: Sun Jul 14 20:21:45 2024 00:21:57.557 write: IOPS=251, BW=62.8MiB/s (65.8MB/s)(641MiB/10210msec); 0 zone resets 00:21:57.557 slat (usec): min=22, max=64292, avg=3893.75, stdev=7343.84 00:21:57.557 clat (msec): min=45, max=489, avg=250.81, stdev=36.40 00:21:57.557 lat (msec): min=45, max=489, avg=254.70, stdev=36.09 00:21:57.557 clat percentiles (msec): 00:21:57.557 | 1.00th=[ 123], 5.00th=[ 192], 10.00th=[ 213], 20.00th=[ 232], 00:21:57.557 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:21:57.557 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 284], 00:21:57.557 | 99.00th=[ 372], 99.50th=[ 439], 99.90th=[ 472], 99.95th=[ 489], 00:21:57.557 | 99.99th=[ 489] 00:21:57.557 bw ( KiB/s): min=55296, max=77668, per=5.79%, avg=64017.80, stdev=5096.52, samples=20 00:21:57.557 iops : min= 216, max= 303, avg=250.05, stdev=19.85, samples=20 00:21:57.557 lat (msec) : 50=0.16%, 100=0.62%, 250=36.66%, 500=62.56% 00:21:57.557 cpu : usr=0.80%, sys=0.95%, ctx=2097, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.557 issued rwts: total=0,2564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.557 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.557 job7: (groupid=0, jobs=1): err= 0: pid=102498: Sun Jul 14 20:21:45 2024 00:21:57.557 write: IOPS=309, BW=77.3MiB/s (81.1MB/s)(790MiB/10218msec); 0 zone resets 00:21:57.557 slat (usec): min=20, max=55553, avg=3153.20, stdev=6669.55 00:21:57.557 clat (msec): min=4, max=468, avg=203.68, stdev=95.19 00:21:57.557 lat (msec): min=4, max=468, avg=206.84, stdev=96.41 00:21:57.557 clat percentiles (msec): 00:21:57.557 | 1.00th=[ 41], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 50], 00:21:57.557 | 30.00th=[ 215], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 259], 00:21:57.557 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 284], 00:21:57.557 | 99.00th=[ 334], 99.50th=[ 397], 99.90th=[ 451], 99.95th=[ 468], 00:21:57.557 | 99.99th=[ 468] 00:21:57.557 bw ( KiB/s): min=57344, max=329580, per=7.16%, avg=79218.65, stdev=60522.44, samples=20 00:21:57.557 iops : min= 224, max= 1287, avg=309.40, stdev=236.33, samples=20 00:21:57.557 lat (msec) : 10=0.13%, 20=0.25%, 50=21.87%, 100=3.20%, 250=21.17% 00:21:57.557 lat (msec) : 500=53.39% 00:21:57.557 cpu : usr=0.84%, sys=0.80%, ctx=3584, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.557 issued rwts: total=0,3160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.557 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.557 job8: (groupid=0, jobs=1): err= 0: pid=102499: Sun Jul 14 20:21:45 2024 00:21:57.557 write: IOPS=455, BW=114MiB/s (119MB/s)(1154MiB/10130msec); 0 zone resets 00:21:57.557 slat (usec): min=21, max=89934, avg=2164.71, stdev=3924.69 00:21:57.557 clat (msec): min=3, max=262, avg=138.26, stdev=19.78 00:21:57.557 lat (msec): min=3, max=262, avg=140.43, stdev=19.69 00:21:57.557 clat percentiles (msec): 00:21:57.557 | 1.00th=[ 102], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 131], 00:21:57.557 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 136], 
60.00th=[ 138], 00:21:57.557 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 165], 00:21:57.557 | 99.00th=[ 234], 99.50th=[ 234], 99.90th=[ 255], 99.95th=[ 255], 00:21:57.557 | 99.99th=[ 264] 00:21:57.557 bw ( KiB/s): min=81245, max=122880, per=10.54%, avg=116532.30, stdev=9139.49, samples=20 00:21:57.557 iops : min= 317, max= 480, avg=455.05, stdev=35.78, samples=20 00:21:57.557 lat (msec) : 4=0.09%, 10=0.07%, 20=0.09%, 50=0.35%, 100=0.35% 00:21:57.557 lat (msec) : 250=98.94%, 500=0.13% 00:21:57.557 cpu : usr=1.12%, sys=1.10%, ctx=6148, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.557 issued rwts: total=0,4615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.557 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.557 job9: (groupid=0, jobs=1): err= 0: pid=102500: Sun Jul 14 20:21:45 2024 00:21:57.557 write: IOPS=701, BW=175MiB/s (184MB/s)(1769MiB/10080msec); 0 zone resets 00:21:57.557 slat (usec): min=19, max=7562, avg=1378.92, stdev=2366.17 00:21:57.557 clat (msec): min=4, max=227, avg=89.76, stdev=12.04 00:21:57.557 lat (msec): min=4, max=228, avg=91.14, stdev=11.96 00:21:57.557 clat percentiles (msec): 00:21:57.557 | 1.00th=[ 31], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:21:57.557 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 90], 60.00th=[ 91], 00:21:57.557 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 99], 00:21:57.557 | 99.00th=[ 122], 99.50th=[ 157], 99.90th=[ 207], 99.95th=[ 222], 00:21:57.557 | 99.99th=[ 228] 00:21:57.557 bw ( KiB/s): min=164864, max=191358, per=16.23%, avg=179526.30, stdev=5646.38, samples=20 00:21:57.557 iops : min= 644, max= 747, avg=701.25, stdev=22.00, samples=20 00:21:57.557 lat (msec) : 10=0.11%, 20=0.31%, 50=1.29%, 100=95.45%, 250=2.84% 00:21:57.557 cpu : usr=1.70%, sys=1.61%, ctx=8485, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.557 issued rwts: total=0,7075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.557 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.557 job10: (groupid=0, jobs=1): err= 0: pid=102502: Sun Jul 14 20:21:45 2024 00:21:57.557 write: IOPS=259, BW=64.8MiB/s (68.0MB/s)(662MiB/10214msec); 0 zone resets 00:21:57.557 slat (usec): min=30, max=98908, avg=3767.92, stdev=7115.89 00:21:57.557 clat (msec): min=11, max=493, avg=242.74, stdev=32.02 00:21:57.557 lat (msec): min=11, max=493, avg=246.51, stdev=31.59 00:21:57.557 clat percentiles (msec): 00:21:57.557 | 1.00th=[ 174], 5.00th=[ 201], 10.00th=[ 213], 20.00th=[ 224], 00:21:57.557 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 241], 60.00th=[ 247], 00:21:57.557 | 70.00th=[ 253], 80.00th=[ 259], 90.00th=[ 271], 95.00th=[ 279], 00:21:57.557 | 99.00th=[ 372], 99.50th=[ 443], 99.90th=[ 477], 99.95th=[ 493], 00:21:57.557 | 99.99th=[ 493] 00:21:57.557 bw ( KiB/s): min=57856, max=73728, per=5.99%, avg=66201.60, stdev=4651.51, samples=20 00:21:57.557 iops : min= 226, max= 288, avg=258.60, stdev=18.17, samples=20 00:21:57.557 lat (msec) : 20=0.04%, 250=63.76%, 500=36.20% 00:21:57.557 cpu : usr=0.82%, sys=1.07%, ctx=1468, majf=0, minf=1 00:21:57.557 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 
8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:57.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:57.558 issued rwts: total=0,2649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.558 00:21:57.558 Run status group 0 (all jobs): 00:21:57.558 WRITE: bw=1080MiB/s (1132MB/s), 61.5MiB/s-175MiB/s (64.5MB/s-184MB/s), io=10.8GiB (11.6GB), run=10080-10226msec 00:21:57.558 00:21:57.558 Disk stats (read/write): 00:21:57.558 nvme0n1: ios=49/4897, merge=0/0, ticks=43/1202682, in_queue=1202725, util=97.84% 00:21:57.558 nvme10n1: ios=49/5098, merge=0/0, ticks=30/1203787, in_queue=1203817, util=97.83% 00:21:57.558 nvme1n1: ios=21/9237, merge=0/0, ticks=21/1212463, in_queue=1212484, util=97.86% 00:21:57.558 nvme2n1: ios=5/5076, merge=0/0, ticks=5/1205002, in_queue=1205007, util=98.06% 00:21:57.558 nvme3n1: ios=5/9135, merge=0/0, ticks=8/1212311, in_queue=1212319, util=98.05% 00:21:57.558 nvme4n1: ios=0/13978, merge=0/0, ticks=0/1216466, in_queue=1216466, util=98.33% 00:21:57.558 nvme5n1: ios=0/4997, merge=0/0, ticks=0/1202319, in_queue=1202319, util=98.26% 00:21:57.558 nvme6n1: ios=0/6187, merge=0/0, ticks=0/1204119, in_queue=1204119, util=98.39% 00:21:57.558 nvme7n1: ios=0/9104, merge=0/0, ticks=0/1215177, in_queue=1215177, util=98.81% 00:21:57.558 nvme8n1: ios=0/14009, merge=0/0, ticks=0/1216329, in_queue=1216329, util=98.77% 00:21:57.558 nvme9n1: ios=0/5172, merge=0/0, ticks=0/1202620, in_queue=1202620, util=98.84% 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:57.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:57.558 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:57.558 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:57.558 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.558 
20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:57.558 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:57.558 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection 
-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:57.558 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.558 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:57.817 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:57.817 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:57.817 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.818 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.818 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:58.076 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.076 20:21:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:58.076 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1215 -- # local i=0 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.076 rmmod nvme_tcp 00:21:58.076 rmmod nvme_fabrics 00:21:58.076 rmmod nvme_keyring 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 101797 ']' 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 101797 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 101797 ']' 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 101797 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101797 00:21:58.076 killing process with pid 101797 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101797' 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 
101797 00:21:58.076 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 101797 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:59.012 00:21:59.012 real 0m50.345s 00:21:59.012 user 2m49.066s 00:21:59.012 sys 0m24.418s 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:59.012 20:21:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:59.012 ************************************ 00:21:59.012 END TEST nvmf_multiconnection 00:21:59.012 ************************************ 00:21:59.012 20:21:47 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:59.012 20:21:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:59.012 20:21:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:59.012 20:21:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.012 ************************************ 00:21:59.012 START TEST nvmf_initiator_timeout 00:21:59.012 ************************************ 00:21:59.012 20:21:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:59.012 * Looking for test storage... 
00:21:59.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.012 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.013 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.271 20:21:48 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:59.271 Cannot find device "nvmf_tgt_br" 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.271 Cannot find device "nvmf_tgt_br2" 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:59.271 Cannot find device "nvmf_tgt_br" 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:59.271 Cannot find device "nvmf_tgt_br2" 00:21:59.271 20:21:48 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:59.271 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
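Stripped of the xtrace noise, nvmf_veth_init above builds a bridged veth topology: one host-side initiator interface and two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. A minimal sketch of the equivalent commands, with names and addresses exactly as traced (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is wired the same way and omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

The iptables ACCEPT rule for 4420/tcp and the three ping checks that follow simply confirm the initiator-to-target path before the target application is started.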
00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.530 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:59.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:21:59.531 00:21:59.531 --- 10.0.0.2 ping statistics --- 00:21:59.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.531 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:59.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:59.531 00:21:59.531 --- 10.0.0.3 ping statistics --- 00:21:59.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.531 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:59.531 00:21:59.531 --- 10.0.0.1 ping statistics --- 00:21:59.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.531 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:59.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
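With connectivity confirmed, nvmfappstart -m 0xF launches the SPDK target inside the namespace and waits for its RPC socket. In essence, the trace that follows amounts to the snippet below (paths and core mask as traced; a condensed sketch, not the literal common.sh body):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                 # 102872 in this run
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs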
00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=102872 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 102872 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 102872 ']' 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:59.531 20:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:59.531 [2024-07-14 20:21:48.544955] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:21:59.531 [2024-07-14 20:21:48.545061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.789 [2024-07-14 20:21:48.682454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.789 [2024-07-14 20:21:48.794440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.789 [2024-07-14 20:21:48.794783] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.789 [2024-07-14 20:21:48.795042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.789 [2024-07-14 20:21:48.795173] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.789 [2024-07-14 20:21:48.795208] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
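The app_setup_trace notices above also say how to inspect the target's tracepoints while this test runs; quoting the two options from the log messages themselves:

    spdk_trace -s nvmf -i 0         # snapshot of the tracepoint groups enabled by -e 0xFFFF
    cp /dev/shm/nvmf_trace.0 /tmp/  # or keep the shm trace file for offline analysis/debug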
00:21:59.789 [2024-07-14 20:21:48.795480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.789 [2024-07-14 20:21:48.795637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.789 [2024-07-14 20:21:48.795707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.789 [2024-07-14 20:21:48.795708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.725 Malloc0 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.725 Delay0 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.725 [2024-07-14 20:21:49.665226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.725 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.726 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.726 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.726 [2024-07-14 20:21:49.697431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.726 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.726 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:00.984 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:00.984 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:22:00.984 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:00.984 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:00.984 20:21:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=102954 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:02.882 20:21:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:02.882 [global] 00:22:02.882 thread=1 00:22:02.882 invalidate=1 00:22:02.882 rw=write 00:22:02.882 time_based=1 00:22:02.882 runtime=60 00:22:02.882 ioengine=libaio 00:22:02.882 direct=1 00:22:02.882 bs=4096 00:22:02.882 iodepth=1 00:22:02.882 norandommap=0 00:22:02.882 numjobs=1 00:22:02.882 00:22:02.882 verify_dump=1 00:22:02.882 verify_backlog=512 00:22:02.882 verify_state_save=0 00:22:02.882 do_verify=1 00:22:02.882 verify=crc32c-intel 00:22:02.882 [job0] 00:22:02.882 filename=/dev/nvme0n1 00:22:02.882 Could not set queue depth (nvme0n1) 00:22:03.139 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:03.139 fio-3.35 00:22:03.139 Starting 1 thread 
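fio is now writing to /dev/nvme0n1 (4 KiB writes, iodepth 1, 60 s, crc32c-intel verify, per the job file above) while the script exercises the initiator-timeout path: the rpc_cmd calls traced next raise all four Delay0 latency knobs to tens of seconds, hold them through a sleep, and later drop them back to 30. A rough rpc.py equivalent of that sequence (latency arguments are in microseconds; assumes the default /var/tmp/spdk.sock RPC socket):

    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read   31000000   # ~31 s
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read   31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    for knob in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$knob" 30         # back to 30 us
    done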
00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:06.446 true 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:06.446 true 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:06.446 true 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:06.446 true 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.446 20:21:54 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.972 true 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.972 true 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.972 true 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.972 true 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:08.972 20:21:57 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 102954 00:23:05.185 00:23:05.185 job0: (groupid=0, jobs=1): err= 0: pid=102975: Sun Jul 14 20:22:52 2024 00:23:05.185 read: IOPS=801, BW=3208KiB/s (3285kB/s)(188MiB/60000msec) 00:23:05.185 slat (usec): min=12, max=104, avg=15.93, stdev= 4.85 00:23:05.185 clat (usec): min=145, max=1867, avg=206.39, stdev=33.62 00:23:05.185 lat (usec): min=174, max=1883, avg=222.33, stdev=34.06 00:23:05.185 clat percentiles (usec): 00:23:05.185 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 00:23:05.185 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:23:05.185 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 251], 00:23:05.185 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 578], 99.95th=[ 709], 00:23:05.185 | 99.99th=[ 1012] 00:23:05.185 write: IOPS=802, BW=3209KiB/s (3286kB/s)(188MiB/60000msec); 0 zone resets 00:23:05.185 slat (usec): min=16, max=9417, avg=23.73, stdev=56.13 00:23:05.185 clat (usec): min=102, max=40393k, avg=997.12, stdev=184121.11 00:23:05.185 lat (usec): min=140, max=40393k, avg=1020.85, stdev=184121.12 00:23:05.185 clat percentiles (usec): 00:23:05.185 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 139], 00:23:05.185 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:23:05.185 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 198], 00:23:05.185 | 99.00th=[ 231], 99.50th=[ 251], 99.90th=[ 482], 99.95th=[ 586], 00:23:05.185 | 99.99th=[ 1893] 00:23:05.185 bw ( KiB/s): min= 6888, max=12288, per=100.00%, avg=9662.36, stdev=1351.24, samples=39 00:23:05.185 iops : min= 1722, max= 3072, avg=2415.59, stdev=337.81, samples=39 00:23:05.185 lat (usec) : 250=97.01%, 500=2.87%, 750=0.09%, 1000=0.02% 00:23:05.185 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:23:05.185 cpu : usr=0.60%, sys=2.29%, ctx=96254, majf=0, minf=2 00:23:05.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.185 issued rwts: total=48118,48128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:05.185 00:23:05.185 Run status group 0 (all jobs): 00:23:05.185 READ: bw=3208KiB/s (3285kB/s), 3208KiB/s-3208KiB/s (3285kB/s-3285kB/s), io=188MiB (197MB), run=60000-60000msec 00:23:05.185 WRITE: bw=3209KiB/s (3286kB/s), 3209KiB/s-3209KiB/s (3286kB/s-3286kB/s), io=188MiB (197MB), run=60000-60000msec 00:23:05.185 00:23:05.185 Disk stats (read/write): 00:23:05.185 nvme0n1: ios=47923/48128, merge=0/0, ticks=10460/8453, in_queue=18913, util=99.92% 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:05.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:05.185 nvmf hotplug test: fio successful as expected 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.185 rmmod nvme_tcp 00:23:05.185 rmmod nvme_fabrics 00:23:05.185 rmmod nvme_keyring 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 102872 ']' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 102872 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 102872 ']' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 102872 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102872 00:23:05.185 killing process with pid 102872 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102872' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 102872 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 102872 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:05.185 ************************************ 00:23:05.185 END TEST nvmf_initiator_timeout 00:23:05.185 ************************************ 00:23:05.185 00:23:05.185 real 1m4.914s 00:23:05.185 user 4m6.665s 00:23:05.185 sys 0m9.453s 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:05.185 20:22:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:05.185 20:22:52 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:23:05.185 20:22:52 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:05.185 20:22:52 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.185 20:22:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.185 20:22:52 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:05.185 20:22:52 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.185 20:22:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.185 20:22:52 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:05.185 20:22:52 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:05.185 20:22:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:05.185 20:22:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:05.185 20:22:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.185 ************************************ 00:23:05.185 START TEST nvmf_multicontroller 00:23:05.185 ************************************ 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:05.185 * Looking for test storage... 
00:23:05.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.185 20:22:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:05.186 20:22:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:05.186 Cannot find device "nvmf_tgt_br" 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.186 Cannot find device "nvmf_tgt_br2" 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 00:23:05.186 Cannot find device "nvmf_tgt_br" 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:05.186 Cannot find device "nvmf_tgt_br2" 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:05.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:23:05.186 00:23:05.186 --- 10.0.0.2 ping statistics --- 00:23:05.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.186 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:05.186 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.186 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:05.186 00:23:05.186 --- 10.0.0.3 ping statistics --- 00:23:05.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.186 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:05.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:23:05.186 00:23:05.186 --- 10.0.0.1 ping statistics --- 00:23:05.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.186 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.186 20:22:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
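The nvmf_veth_init trace above is easier to read as a topology: the SPDK target lives in the nvmf_tgt_ns_spdk namespace, each side gets one end of a veth pair, and the peer ends are enslaved to the nvmf_br bridge. Below is a condensed restatement of the ip/iptables calls visible in the log, using the same interface names and addresses; the second target interface (nvmf_tgt_if2, 10.0.0.3) is configured the same way and omitted here, as are the cleanup and error paths.

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per side: the *_if end carries the IP, the *_br end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Allow NVMe/TCP traffic in, let the bridge forward, then sanity-check with ping.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2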
00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=103793 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 103793 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 103793 ']' 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.187 20:22:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.187 [2024-07-14 20:22:53.546400] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:05.187 [2024-07-14 20:22:53.546499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.187 [2024-07-14 20:22:53.682846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:05.187 [2024-07-14 20:22:53.801463] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.187 [2024-07-14 20:22:53.801526] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.187 [2024-07-14 20:22:53.801537] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.187 [2024-07-14 20:22:53.801545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.187 [2024-07-14 20:22:53.801552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
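nvmfappstart -m 0xE then amounts to launching nvmf_tgt inside that namespace and waiting for its RPC socket; the notices above ("Total cores available: 3", reactors on cores 1-3, tracepoint group mask 0xFFFF) follow directly from the -m and -e flags. A minimal sketch, assuming the binary path from the log and replacing the test's waitforlisten helper with a simple socket-existence poll:

    # -i 0      shared-memory id (the $NVMF_APP_SHM_ID referenced by the traps)
    # -e 0xFFFF enable all tracepoint groups
    # -m 0xE    core mask 0b1110, i.e. reactors on cores 1, 2 and 3
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Crude stand-in for waitforlisten: block until the RPC socket shows up.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done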
00:23:05.187 [2024-07-14 20:22:53.801728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.187 [2024-07-14 20:22:53.802688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.187 [2024-07-14 20:22:53.802755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 [2024-07-14 20:22:54.635061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 Malloc0 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 [2024-07-14 20:22:54.709625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 
20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 [2024-07-14 20:22:54.717489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 Malloc1 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=103851 00:23:05.753 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 103851 /var/tmp/bdevperf.sock 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 103851 ']' 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.754 20:22:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 NVMe0n1 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.127 1 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 2024/07/14 20:22:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:07.127 request: 00:23:07.127 { 00:23:07.127 "method": "bdev_nvme_attach_controller", 00:23:07.127 "params": { 00:23:07.127 "name": "NVMe0", 00:23:07.127 "trtype": "tcp", 00:23:07.127 "traddr": "10.0.0.2", 00:23:07.127 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:07.127 "hostaddr": "10.0.0.2", 00:23:07.127 "hostsvcid": "60000", 00:23:07.127 "adrfam": "ipv4", 00:23:07.127 "trsvcid": "4420", 00:23:07.127 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:23:07.127 } 00:23:07.127 } 00:23:07.127 Got JSON-RPC error response 00:23:07.127 GoRPCClient: error on JSON-RPC call 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 2024/07/14 20:22:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:07.127 request: 00:23:07.127 { 00:23:07.127 "method": "bdev_nvme_attach_controller", 00:23:07.127 "params": { 00:23:07.127 "name": "NVMe0", 00:23:07.127 "trtype": "tcp", 00:23:07.127 "traddr": "10.0.0.2", 00:23:07.127 "hostaddr": "10.0.0.2", 00:23:07.127 "hostsvcid": "60000", 00:23:07.127 "adrfam": "ipv4", 00:23:07.127 "trsvcid": "4420", 00:23:07.127 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:23:07.127 } 00:23:07.127 } 00:23:07.127 Got JSON-RPC error response 00:23:07.127 GoRPCClient: error on JSON-RPC call 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 2024/07/14 20:22:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:23:07.127 request: 00:23:07.127 { 00:23:07.127 "method": "bdev_nvme_attach_controller", 00:23:07.127 "params": { 00:23:07.127 "name": "NVMe0", 00:23:07.127 "trtype": "tcp", 00:23:07.127 "traddr": "10.0.0.2", 00:23:07.127 "hostaddr": "10.0.0.2", 00:23:07.127 "hostsvcid": "60000", 00:23:07.127 "adrfam": "ipv4", 00:23:07.127 "trsvcid": "4420", 00:23:07.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.127 
"multipath": "disable" 00:23:07.127 } 00:23:07.127 } 00:23:07.127 Got JSON-RPC error response 00:23:07.127 GoRPCClient: error on JSON-RPC call 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.127 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.127 2024/07/14 20:22:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:07.127 request: 00:23:07.127 { 00:23:07.127 "method": "bdev_nvme_attach_controller", 00:23:07.127 "params": { 00:23:07.127 "name": "NVMe0", 00:23:07.127 "trtype": "tcp", 00:23:07.127 "traddr": "10.0.0.2", 00:23:07.127 "hostaddr": "10.0.0.2", 00:23:07.127 "hostsvcid": "60000", 00:23:07.128 "adrfam": "ipv4", 00:23:07.128 "trsvcid": "4420", 00:23:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.128 "multipath": "failover" 00:23:07.128 } 00:23:07.128 } 00:23:07.128 Got JSON-RPC error response 00:23:07.128 GoRPCClient: error on JSON-RPC call 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.128 20:22:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.128 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.128 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:07.128 20:22:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.499 0 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 103851 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 103851 ']' 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 103851 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103851 00:23:08.499 killing process with pid 103851 00:23:08.499 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103851' 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 103851 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 103851 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.500 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:23:08.758 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:08.758 [2024-07-14 20:22:54.848636] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:08.758 [2024-07-14 20:22:54.848765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103851 ] 00:23:08.758 [2024-07-14 20:22:54.990679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.758 [2024-07-14 20:22:55.092697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.758 [2024-07-14 20:22:56.106343] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 7f85e8a9-67be-4195-a4a1-7efd90c4ac61 already exists 00:23:08.758 [2024-07-14 20:22:56.106413] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:7f85e8a9-67be-4195-a4a1-7efd90c4ac61 alias for bdev NVMe1n1 00:23:08.758 [2024-07-14 20:22:56.106450] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:08.758 Running I/O for 1 seconds... 
00:23:08.758 00:23:08.758 Latency(us) 00:23:08.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.758 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:08.758 NVMe0n1 : 1.00 19151.58 74.81 0.00 0.00 6673.31 3991.74 14656.23 00:23:08.758 =================================================================================================================== 00:23:08.758 Total : 19151.58 74.81 0.00 0.00 6673.31 3991.74 14656.23 00:23:08.758 Received shutdown signal, test time was about 1.000000 seconds 00:23:08.758 00:23:08.758 Latency(us) 00:23:08.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.758 =================================================================================================================== 00:23:08.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.758 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:08.758 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.759 rmmod nvme_tcp 00:23:08.759 rmmod nvme_fabrics 00:23:08.759 rmmod nvme_keyring 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 103793 ']' 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 103793 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 103793 ']' 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 103793 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103793 00:23:08.759 killing process with pid 103793 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103793' 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 103793 00:23:08.759 20:22:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 
-- # wait 103793 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:09.324 ************************************ 00:23:09.324 END TEST nvmf_multicontroller 00:23:09.324 ************************************ 00:23:09.324 00:23:09.324 real 0m5.181s 00:23:09.324 user 0m16.024s 00:23:09.324 sys 0m1.179s 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:09.324 20:22:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.324 20:22:58 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:09.324 20:22:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:09.324 20:22:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:09.324 20:22:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.324 ************************************ 00:23:09.324 START TEST nvmf_aer 00:23:09.324 ************************************ 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:09.324 * Looking for test storage... 
00:23:09.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.324 
20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:09.324 Cannot find device "nvmf_tgt_br" 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:23:09.324 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:09.582 Cannot find device "nvmf_tgt_br2" 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:09.582 Cannot find device "nvmf_tgt_br" 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:09.582 Cannot find device "nvmf_tgt_br2" 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:09.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:09.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
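For orientation, the nvmf_veth_init sequence traced here (started above and completed in the entries that follow) builds an isolated target network namespace joined to the initiator by veth pairs and a bridge. A condensed sketch of the same steps, using the interface and namespace names from the trace and omitting the second target interface (nvmf_tgt_if2/nvmf_tgt_br2), would look roughly like this; it is an illustration, not a substitute for nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk                                    # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge tying both sides together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # sanity check, as in the trace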
00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:09.582 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:09.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:09.840 00:23:09.840 --- 10.0.0.2 ping statistics --- 00:23:09.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.840 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:09.840 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:09.840 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:23:09.840 00:23:09.840 --- 10.0.0.3 ping statistics --- 00:23:09.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.840 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:09.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:09.840 00:23:09.840 --- 10.0.0.1 ping statistics --- 00:23:09.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.840 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.840 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=104100 00:23:09.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 104100 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 104100 ']' 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:09.841 20:22:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.841 [2024-07-14 20:22:58.783726] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:09.841 [2024-07-14 20:22:58.783838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.098 [2024-07-14 20:22:58.926416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.098 [2024-07-14 20:22:59.026914] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.098 [2024-07-14 20:22:59.027341] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:10.098 [2024-07-14 20:22:59.027366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.098 [2024-07-14 20:22:59.027379] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.098 [2024-07-14 20:22:59.027388] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.098 [2024-07-14 20:22:59.027567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.098 [2024-07-14 20:22:59.028720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.098 [2024-07-14 20:22:59.028915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.098 [2024-07-14 20:22:59.028925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.032 [2024-07-14 20:22:59.845145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.032 Malloc0 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.032 [2024-07-14 20:22:59.909362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.032 [ 00:23:11.032 { 00:23:11.032 "allow_any_host": true, 00:23:11.032 "hosts": [], 00:23:11.032 "listen_addresses": [], 00:23:11.032 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.032 "subtype": "Discovery" 00:23:11.032 }, 00:23:11.032 { 00:23:11.032 "allow_any_host": true, 00:23:11.032 "hosts": [], 00:23:11.032 "listen_addresses": [ 00:23:11.032 { 00:23:11.032 "adrfam": "IPv4", 00:23:11.032 "traddr": "10.0.0.2", 00:23:11.032 "trsvcid": "4420", 00:23:11.032 "trtype": "TCP" 00:23:11.032 } 00:23:11.032 ], 00:23:11.032 "max_cntlid": 65519, 00:23:11.032 "max_namespaces": 2, 00:23:11.032 "min_cntlid": 1, 00:23:11.032 "model_number": "SPDK bdev Controller", 00:23:11.032 "namespaces": [ 00:23:11.032 { 00:23:11.032 "bdev_name": "Malloc0", 00:23:11.032 "name": "Malloc0", 00:23:11.032 "nguid": "5AFC1ACC1166424FB5417EFEDD289A63", 00:23:11.032 "nsid": 1, 00:23:11.032 "uuid": "5afc1acc-1166-424f-b541-7efedd289a63" 00:23:11.032 } 00:23:11.032 ], 00:23:11.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.032 "serial_number": "SPDK00000000000001", 00:23:11.032 "subtype": "NVMe" 00:23:11.032 } 00:23:11.032 ] 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=104154 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:23:11.032 20:22:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:11.032 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:11.032 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:23:11.032 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:23:11.032 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.290 Malloc1 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.290 Asynchronous Event Request test 00:23:11.290 Attaching to 10.0.0.2 00:23:11.290 Attached to 10.0.0.2 00:23:11.290 Registering asynchronous event callbacks... 00:23:11.290 Starting namespace attribute notice tests for all controllers... 00:23:11.290 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:11.290 aer_cb - Changed Namespace 00:23:11.290 Cleaning up... 00:23:11.290 [ 00:23:11.290 { 00:23:11.290 "allow_any_host": true, 00:23:11.290 "hosts": [], 00:23:11.290 "listen_addresses": [], 00:23:11.290 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.290 "subtype": "Discovery" 00:23:11.290 }, 00:23:11.290 { 00:23:11.290 "allow_any_host": true, 00:23:11.290 "hosts": [], 00:23:11.290 "listen_addresses": [ 00:23:11.290 { 00:23:11.290 "adrfam": "IPv4", 00:23:11.290 "traddr": "10.0.0.2", 00:23:11.290 "trsvcid": "4420", 00:23:11.290 "trtype": "TCP" 00:23:11.290 } 00:23:11.290 ], 00:23:11.290 "max_cntlid": 65519, 00:23:11.290 "max_namespaces": 2, 00:23:11.290 "min_cntlid": 1, 00:23:11.290 "model_number": "SPDK bdev Controller", 00:23:11.290 "namespaces": [ 00:23:11.290 { 00:23:11.290 "bdev_name": "Malloc0", 00:23:11.290 "name": "Malloc0", 00:23:11.290 "nguid": "5AFC1ACC1166424FB5417EFEDD289A63", 00:23:11.290 "nsid": 1, 00:23:11.290 "uuid": "5afc1acc-1166-424f-b541-7efedd289a63" 00:23:11.290 }, 00:23:11.290 { 00:23:11.290 "bdev_name": "Malloc1", 00:23:11.290 "name": "Malloc1", 00:23:11.290 "nguid": "ACDB0A71174F479B91B4598F13A593EC", 00:23:11.290 "nsid": 2, 00:23:11.290 "uuid": "acdb0a71-174f-479b-91b4-598f13a593ec" 00:23:11.290 } 00:23:11.290 ], 00:23:11.290 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.290 "serial_number": "SPDK00000000000001", 00:23:11.290 "subtype": "NVMe" 00:23:11.290 } 00:23:11.290 ] 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 104154 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.290 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:11.291 20:23:00 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:11.291 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:11.291 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.548 rmmod nvme_tcp 00:23:11.548 rmmod nvme_fabrics 00:23:11.548 rmmod nvme_keyring 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 104100 ']' 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 104100 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 104100 ']' 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 104100 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104100 00:23:11.548 killing process with pid 104100 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104100' 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 104100 00:23:11.548 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 104100 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
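Stripped of the xtrace noise, the aer.sh flow exercised above is a plain sequence of SPDK JSON-RPC calls plus the aer example binary. rpc_cmd in the trace is effectively a wrapper around SPDK's scripts/rpc.py; issued by hand against the target's RPC socket, the equivalent sequence (arguments taken verbatim from the trace, paths assumed to match this repo layout) is approximately:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start the AER listener (test/nvme/aer/aer), expecting 2 namespaces, then hot-add the second:
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  $RPC bdev_malloc_create 64 4096 --name Malloc1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the namespace-attribute AER seen above

The aer binary creates /tmp/aer_touch_file once it is ready; the script waits on that file, hot-adds the second namespace, and the resulting "aer_cb - Changed Namespace" notification lets the binary finish before the subsystem is torn down.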
00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:11.806 00:23:11.806 real 0m2.505s 00:23:11.806 user 0m6.915s 00:23:11.806 sys 0m0.688s 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:11.806 ************************************ 00:23:11.806 END TEST nvmf_aer 00:23:11.806 ************************************ 00:23:11.806 20:23:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.806 20:23:00 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.806 20:23:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:11.806 20:23:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:11.806 20:23:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:11.806 ************************************ 00:23:11.806 START TEST nvmf_async_init 00:23:11.806 ************************************ 00:23:11.807 20:23:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.807 * Looking for test storage... 00:23:11.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.807 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.807 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.065 
20:23:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 
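One detail worth noting from the common.sh sourcing above: each run generates a fresh host identity (NVME_HOSTNQN via nvme gen-hostnqn, with NVME_HOSTID as the matching UUID), collected in the NVME_HOST array for later kernel-initiator connects. This excerpt never issues such a connect itself; purely as an illustration of how those variables are meant to be consumed (subsystem NQN taken from NVME_SUBNQN above, flags are standard nvme-cli options), it would look like:

  # Illustrative only -- not executed in this trace
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 \
      --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4   # values as generated in this run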
00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ec5610b9e4be4aacb56541bbf5593192 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:12.065 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:12.066 20:23:00 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:12.066 Cannot find device "nvmf_tgt_br" 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:12.066 Cannot find device "nvmf_tgt_br2" 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:12.066 Cannot find device "nvmf_tgt_br" 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:12.066 Cannot find device "nvmf_tgt_br2" 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:23:12.066 20:23:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:12.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:12.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:12.066 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:12.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:23:12.325 00:23:12.325 --- 10.0.0.2 ping statistics --- 00:23:12.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.325 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:12.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:12.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:23:12.325 00:23:12.325 --- 10.0.0.3 ping statistics --- 00:23:12.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.325 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:12.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:12.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:12.325 00:23:12.325 --- 10.0.0.1 ping statistics --- 00:23:12.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.325 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=104321 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 104321 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 104321 ']' 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.325 20:23:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:12.325 [2024-07-14 20:23:01.328680] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:12.325 [2024-07-14 20:23:01.328779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.583 [2024-07-14 20:23:01.464959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.583 [2024-07-14 20:23:01.547287] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.583 [2024-07-14 20:23:01.547370] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:12.583 [2024-07-14 20:23:01.547398] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.583 [2024-07-14 20:23:01.547407] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.583 [2024-07-14 20:23:01.547414] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.583 [2024-07-14 20:23:01.547440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.519 [2024-07-14 20:23:02.326736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.519 null0 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.519 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ec5610b9e4be4aacb56541bbf5593192 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.520 
20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.520 [2024-07-14 20:23:02.366836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.520 nvme0n1 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.520 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.779 [ 00:23:13.779 { 00:23:13.779 "aliases": [ 00:23:13.779 "ec5610b9-e4be-4aac-b565-41bbf5593192" 00:23:13.779 ], 00:23:13.779 "assigned_rate_limits": { 00:23:13.779 "r_mbytes_per_sec": 0, 00:23:13.779 "rw_ios_per_sec": 0, 00:23:13.779 "rw_mbytes_per_sec": 0, 00:23:13.779 "w_mbytes_per_sec": 0 00:23:13.779 }, 00:23:13.779 "block_size": 512, 00:23:13.779 "claimed": false, 00:23:13.779 "driver_specific": { 00:23:13.779 "mp_policy": "active_passive", 00:23:13.779 "nvme": [ 00:23:13.779 { 00:23:13.779 "ctrlr_data": { 00:23:13.779 "ana_reporting": false, 00:23:13.779 "cntlid": 1, 00:23:13.779 "firmware_revision": "24.05.1", 00:23:13.779 "model_number": "SPDK bdev Controller", 00:23:13.779 "multi_ctrlr": true, 00:23:13.779 "oacs": { 00:23:13.779 "firmware": 0, 00:23:13.779 "format": 0, 00:23:13.779 "ns_manage": 0, 00:23:13.779 "security": 0 00:23:13.779 }, 00:23:13.779 "serial_number": "00000000000000000000", 00:23:13.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.779 "vendor_id": "0x8086" 00:23:13.779 }, 00:23:13.779 "ns_data": { 00:23:13.779 "can_share": true, 00:23:13.779 "id": 1 00:23:13.779 }, 00:23:13.779 "trid": { 00:23:13.779 "adrfam": "IPv4", 00:23:13.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.779 "traddr": "10.0.0.2", 00:23:13.779 "trsvcid": "4420", 00:23:13.779 "trtype": "TCP" 00:23:13.779 }, 00:23:13.779 "vs": { 00:23:13.779 "nvme_version": "1.3" 00:23:13.779 } 00:23:13.779 } 00:23:13.779 ] 00:23:13.779 }, 00:23:13.779 "memory_domains": [ 00:23:13.779 { 00:23:13.779 "dma_device_id": "system", 00:23:13.779 "dma_device_type": 1 00:23:13.779 } 00:23:13.779 ], 00:23:13.779 "name": "nvme0n1", 00:23:13.779 "num_blocks": 2097152, 00:23:13.779 "product_name": "NVMe disk", 00:23:13.779 "supported_io_types": { 00:23:13.779 "abort": true, 00:23:13.779 "compare": true, 00:23:13.779 "compare_and_write": true, 00:23:13.779 "flush": true, 00:23:13.779 "nvme_admin": true, 00:23:13.779 "nvme_io": true, 00:23:13.779 "read": true, 00:23:13.779 "reset": true, 00:23:13.779 "unmap": false, 00:23:13.779 "write": true, 00:23:13.779 "write_zeroes": true 00:23:13.779 }, 00:23:13.779 "uuid": "ec5610b9-e4be-4aac-b565-41bbf5593192", 00:23:13.779 "zoned": false 00:23:13.779 } 00:23:13.779 ] 00:23:13.779 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.779 
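For reference, the RPC sequence that host/async_init.sh has driven up to this point condenses to the short shell sketch below. Every call is the one visible in the trace; the only assumption is that rpc_cmd expands to scripts/rpc.py talking to the target's /var/tmp/spdk.sock.

# Hedged recap of the async_init RPC flow (a sketch, not the test script itself)
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"                      # assumed expansion of rpc_cmd
$RPC nvmf_create_transport -t tcp -o                              # TCP transport init
$RPC bdev_null_create null0 1024 512                              # 1024 blocks x 512 B backing bdev
$RPC bdev_wait_for_examine
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ec5610b9e4be4aacb56541bbf5593192
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$RPC bdev_get_bdevs -b nvme0n1                                    # prints the JSON block shown above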
20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:13.779 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.779 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.779 [2024-07-14 20:23:02.630794] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.779 [2024-07-14 20:23:02.630945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e01b0 (9): Bad file descriptor 00:23:13.779 [2024-07-14 20:23:02.763048] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:13.779 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.779 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.779 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.779 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.779 [ 00:23:13.779 { 00:23:13.779 "aliases": [ 00:23:13.779 "ec5610b9-e4be-4aac-b565-41bbf5593192" 00:23:13.779 ], 00:23:13.779 "assigned_rate_limits": { 00:23:13.779 "r_mbytes_per_sec": 0, 00:23:13.779 "rw_ios_per_sec": 0, 00:23:13.779 "rw_mbytes_per_sec": 0, 00:23:13.779 "w_mbytes_per_sec": 0 00:23:13.779 }, 00:23:13.779 "block_size": 512, 00:23:13.779 "claimed": false, 00:23:13.779 "driver_specific": { 00:23:13.779 "mp_policy": "active_passive", 00:23:13.779 "nvme": [ 00:23:13.779 { 00:23:13.779 "ctrlr_data": { 00:23:13.779 "ana_reporting": false, 00:23:13.779 "cntlid": 2, 00:23:13.779 "firmware_revision": "24.05.1", 00:23:13.779 "model_number": "SPDK bdev Controller", 00:23:13.779 "multi_ctrlr": true, 00:23:13.779 "oacs": { 00:23:13.779 "firmware": 0, 00:23:13.779 "format": 0, 00:23:13.779 "ns_manage": 0, 00:23:13.779 "security": 0 00:23:13.779 }, 00:23:13.779 "serial_number": "00000000000000000000", 00:23:13.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.779 "vendor_id": "0x8086" 00:23:13.779 }, 00:23:13.779 "ns_data": { 00:23:13.779 "can_share": true, 00:23:13.779 "id": 1 00:23:13.779 }, 00:23:13.779 "trid": { 00:23:13.779 "adrfam": "IPv4", 00:23:13.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.779 "traddr": "10.0.0.2", 00:23:13.779 "trsvcid": "4420", 00:23:13.779 "trtype": "TCP" 00:23:13.779 }, 00:23:13.779 "vs": { 00:23:13.779 "nvme_version": "1.3" 00:23:13.779 } 00:23:13.779 } 00:23:13.779 ] 00:23:13.779 }, 00:23:13.779 "memory_domains": [ 00:23:13.779 { 00:23:13.779 "dma_device_id": "system", 00:23:13.780 "dma_device_type": 1 00:23:13.780 } 00:23:13.780 ], 00:23:13.780 "name": "nvme0n1", 00:23:13.780 "num_blocks": 2097152, 00:23:13.780 "product_name": "NVMe disk", 00:23:13.780 "supported_io_types": { 00:23:13.780 "abort": true, 00:23:13.780 "compare": true, 00:23:13.780 "compare_and_write": true, 00:23:13.780 "flush": true, 00:23:13.780 "nvme_admin": true, 00:23:13.780 "nvme_io": true, 00:23:13.780 "read": true, 00:23:13.780 "reset": true, 00:23:13.780 "unmap": false, 00:23:13.780 "write": true, 00:23:13.780 "write_zeroes": true 00:23:13.780 }, 00:23:13.780 "uuid": "ec5610b9-e4be-4aac-b565-41bbf5593192", 00:23:13.780 "zoned": false 00:23:13.780 } 00:23:13.780 ] 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.MyGlaHDOgB 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.MyGlaHDOgB 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.780 [2024-07-14 20:23:02.834973] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.780 [2024-07-14 20:23:02.835180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MyGlaHDOgB 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.780 [2024-07-14 20:23:02.842966] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MyGlaHDOgB 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.780 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.780 [2024-07-14 20:23:02.850947] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.780 [2024-07-14 20:23:02.851032] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.039 nvme0n1 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd 
bdev_get_bdevs -b nvme0n1 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.039 [ 00:23:14.039 { 00:23:14.039 "aliases": [ 00:23:14.039 "ec5610b9-e4be-4aac-b565-41bbf5593192" 00:23:14.039 ], 00:23:14.039 "assigned_rate_limits": { 00:23:14.039 "r_mbytes_per_sec": 0, 00:23:14.039 "rw_ios_per_sec": 0, 00:23:14.039 "rw_mbytes_per_sec": 0, 00:23:14.039 "w_mbytes_per_sec": 0 00:23:14.039 }, 00:23:14.039 "block_size": 512, 00:23:14.039 "claimed": false, 00:23:14.039 "driver_specific": { 00:23:14.039 "mp_policy": "active_passive", 00:23:14.039 "nvme": [ 00:23:14.039 { 00:23:14.039 "ctrlr_data": { 00:23:14.039 "ana_reporting": false, 00:23:14.039 "cntlid": 3, 00:23:14.039 "firmware_revision": "24.05.1", 00:23:14.039 "model_number": "SPDK bdev Controller", 00:23:14.039 "multi_ctrlr": true, 00:23:14.039 "oacs": { 00:23:14.039 "firmware": 0, 00:23:14.039 "format": 0, 00:23:14.039 "ns_manage": 0, 00:23:14.039 "security": 0 00:23:14.039 }, 00:23:14.039 "serial_number": "00000000000000000000", 00:23:14.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.039 "vendor_id": "0x8086" 00:23:14.039 }, 00:23:14.039 "ns_data": { 00:23:14.039 "can_share": true, 00:23:14.039 "id": 1 00:23:14.039 }, 00:23:14.039 "trid": { 00:23:14.039 "adrfam": "IPv4", 00:23:14.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.039 "traddr": "10.0.0.2", 00:23:14.039 "trsvcid": "4421", 00:23:14.039 "trtype": "TCP" 00:23:14.039 }, 00:23:14.039 "vs": { 00:23:14.039 "nvme_version": "1.3" 00:23:14.039 } 00:23:14.039 } 00:23:14.039 ] 00:23:14.039 }, 00:23:14.039 "memory_domains": [ 00:23:14.039 { 00:23:14.039 "dma_device_id": "system", 00:23:14.039 "dma_device_type": 1 00:23:14.039 } 00:23:14.039 ], 00:23:14.039 "name": "nvme0n1", 00:23:14.039 "num_blocks": 2097152, 00:23:14.039 "product_name": "NVMe disk", 00:23:14.039 "supported_io_types": { 00:23:14.039 "abort": true, 00:23:14.039 "compare": true, 00:23:14.039 "compare_and_write": true, 00:23:14.039 "flush": true, 00:23:14.039 "nvme_admin": true, 00:23:14.039 "nvme_io": true, 00:23:14.039 "read": true, 00:23:14.039 "reset": true, 00:23:14.039 "unmap": false, 00:23:14.039 "write": true, 00:23:14.039 "write_zeroes": true 00:23:14.039 }, 00:23:14.039 "uuid": "ec5610b9-e4be-4aac-b565-41bbf5593192", 00:23:14.039 "zoned": false 00:23:14.039 } 00:23:14.039 ] 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.MyGlaHDOgB 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.039 20:23:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.039 20:23:03 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.039 rmmod nvme_tcp 00:23:14.039 rmmod nvme_fabrics 00:23:14.039 rmmod nvme_keyring 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 104321 ']' 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 104321 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 104321 ']' 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 104321 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104321 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:14.039 killing process with pid 104321 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104321' 00:23:14.039 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 104321 00:23:14.040 [2024-07-14 20:23:03.122815] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.040 [2024-07-14 20:23:03.122886] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:14.040 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 104321 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:14.299 ************************************ 00:23:14.299 END TEST nvmf_async_init 00:23:14.299 ************************************ 00:23:14.299 00:23:14.299 real 0m2.552s 00:23:14.299 user 0m2.379s 00:23:14.299 sys 0m0.615s 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:14.299 20:23:03 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.557 20:23:03 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:14.557 20:23:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:14.557 20:23:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:14.557 20:23:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.557 ************************************ 00:23:14.557 START TEST dma 00:23:14.557 ************************************ 00:23:14.557 20:23:03 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:14.557 * Looking for test storage... 00:23:14.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:14.557 20:23:03 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:14.557 20:23:03 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.557 20:23:03 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.557 20:23:03 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.557 20:23:03 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.557 20:23:03 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.557 20:23:03 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.557 20:23:03 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:14.557 20:23:03 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.557 20:23:03 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.557 20:23:03 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:14.557 20:23:03 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:14.557 ************************************ 00:23:14.557 END TEST dma 00:23:14.557 ************************************ 00:23:14.557 00:23:14.557 real 0m0.106s 00:23:14.557 user 0m0.045s 00:23:14.557 sys 0m0.067s 00:23:14.557 20:23:03 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:14.557 20:23:03 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:14.557 20:23:03 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:14.557 20:23:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:14.557 20:23:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:14.557 20:23:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:14.557 ************************************ 00:23:14.558 START TEST nvmf_identify 
00:23:14.558 ************************************ 00:23:14.558 20:23:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:14.816 * Looking for test storage... 00:23:14.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.816 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:14.817 Cannot find device "nvmf_tgt_br" 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:14.817 Cannot find device "nvmf_tgt_br2" 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:14.817 Cannot find device "nvmf_tgt_br" 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:14.817 Cannot find device "nvmf_tgt_br2" 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:14.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.817 20:23:03 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:14.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:14.817 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:15.075 20:23:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:15.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:15.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:23:15.075 00:23:15.075 --- 10.0.0.2 ping statistics --- 00:23:15.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.075 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:15.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:15.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:23:15.075 00:23:15.075 --- 10.0.0.3 ping statistics --- 00:23:15.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.075 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:15.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:15.075 00:23:15.075 --- 10.0.0.1 ping statistics --- 00:23:15.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.075 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=104592 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 104592 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 104592 ']' 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
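The nvmf_veth_init block above (namespace, veth pairs, bridge, firewall rules, ping checks) reduces to the following condensed sketch. The names, addresses, and port 4420 are the ones this run used; some link-up steps are folded together for brevity.

# Hedged recap of the veth/bridge topology built for the TCP tests
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target side,    10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2         # target side,    10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # namespace -> host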
00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.075 20:23:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:15.075 [2024-07-14 20:23:04.157162] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:15.076 [2024-07-14 20:23:04.157312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.334 [2024-07-14 20:23:04.303966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.335 [2024-07-14 20:23:04.396559] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.335 [2024-07-14 20:23:04.396609] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.335 [2024-07-14 20:23:04.396620] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.335 [2024-07-14 20:23:04.396628] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.335 [2024-07-14 20:23:04.396634] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.335 [2024-07-14 20:23:04.396796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.335 [2024-07-14 20:23:04.397590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.335 [2024-07-14 20:23:04.397734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.335 [2024-07-14 20:23:04.397825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.265 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.265 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:16.265 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.265 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.265 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.265 [2024-07-14 20:23:05.171550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.265 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.266 Malloc0 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.266 20:23:05 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.266 [2024-07-14 20:23:05.281197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.266 [ 00:23:16.266 { 00:23:16.266 "allow_any_host": true, 00:23:16.266 "hosts": [], 00:23:16.266 "listen_addresses": [ 00:23:16.266 { 00:23:16.266 "adrfam": "IPv4", 00:23:16.266 "traddr": "10.0.0.2", 00:23:16.266 "trsvcid": "4420", 00:23:16.266 "trtype": "TCP" 00:23:16.266 } 00:23:16.266 ], 00:23:16.266 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:16.266 "subtype": "Discovery" 00:23:16.266 }, 00:23:16.266 { 00:23:16.266 "allow_any_host": true, 00:23:16.266 "hosts": [], 00:23:16.266 "listen_addresses": [ 00:23:16.266 { 00:23:16.266 "adrfam": "IPv4", 00:23:16.266 "traddr": "10.0.0.2", 00:23:16.266 "trsvcid": "4420", 00:23:16.266 "trtype": "TCP" 00:23:16.266 } 00:23:16.266 ], 00:23:16.266 "max_cntlid": 65519, 00:23:16.266 "max_namespaces": 32, 00:23:16.266 "min_cntlid": 1, 00:23:16.266 "model_number": "SPDK bdev Controller", 00:23:16.266 "namespaces": [ 00:23:16.266 { 00:23:16.266 "bdev_name": "Malloc0", 00:23:16.266 "eui64": "ABCDEF0123456789", 00:23:16.266 "name": "Malloc0", 00:23:16.266 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:16.266 "nsid": 1, 00:23:16.266 "uuid": "4290adc5-7a7a-4c05-996b-e07c052d19b1" 00:23:16.266 } 00:23:16.266 ], 00:23:16.266 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.266 "serial_number": "SPDK00000000000001", 00:23:16.266 "subtype": "NVMe" 00:23:16.266 } 00:23:16.266 ] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.266 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:16.266 
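At this point host/identify.sh drives the built example binary directly against the discovery service; a minimal usage sketch of that invocation follows.

# Hedged sketch: identify the discovery controller over NVMe/TCP (same arguments as the trace)
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all
# Swapping the subnqn for a data subsystem (e.g. nqn.2016-06.io.spdk:cnode1) would identify that
# controller and its namespaces instead; that variant is an assumption about usage, not a command
# shown in this excerpt.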
[2024-07-14 20:23:05.331793] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:16.266 [2024-07-14 20:23:05.331850] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104645 ] 00:23:16.527 [2024-07-14 20:23:05.466652] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:16.527 [2024-07-14 20:23:05.466737] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:16.527 [2024-07-14 20:23:05.466744] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:16.527 [2024-07-14 20:23:05.466759] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:16.527 [2024-07-14 20:23:05.466770] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:16.527 [2024-07-14 20:23:05.466967] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:16.527 [2024-07-14 20:23:05.467020] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x205a580 0 00:23:16.527 [2024-07-14 20:23:05.471914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:16.527 [2024-07-14 20:23:05.471938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:16.527 [2024-07-14 20:23:05.471960] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:16.527 [2024-07-14 20:23:05.471964] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:16.528 [2024-07-14 20:23:05.472012] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.472020] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.472026] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.472041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:16.528 [2024-07-14 20:23:05.472073] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.479873] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.479897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.528 [2024-07-14 20:23:05.479918] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.479924] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.479940] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:16.528 [2024-07-14 20:23:05.479949] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:16.528 [2024-07-14 20:23:05.479955] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:16.528 [2024-07-14 20:23:05.479974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 
20:23:05.479979] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.479983] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.479993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.528 [2024-07-14 20:23:05.480023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.480091] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.480098] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.528 [2024-07-14 20:23:05.480101] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.480112] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:16.528 [2024-07-14 20:23:05.480135] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:16.528 [2024-07-14 20:23:05.480143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480147] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480151] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.480159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.528 [2024-07-14 20:23:05.480180] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.480231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.480238] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.528 [2024-07-14 20:23:05.480241] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480245] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.480252] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:16.528 [2024-07-14 20:23:05.480261] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:16.528 [2024-07-14 20:23:05.480268] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480272] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480276] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.480284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.528 [2024-07-14 20:23:05.480310] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.480373] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.480379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.528 [2024-07-14 20:23:05.480383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.480394] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:16.528 [2024-07-14 20:23:05.480405] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.480421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.528 [2024-07-14 20:23:05.480441] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.480492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.480499] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.528 [2024-07-14 20:23:05.480503] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480507] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.480514] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:16.528 [2024-07-14 20:23:05.480519] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:16.528 [2024-07-14 20:23:05.480527] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:16.528 [2024-07-14 20:23:05.480633] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:16.528 [2024-07-14 20:23:05.480638] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:16.528 [2024-07-14 20:23:05.480648] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480657] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.480665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.528 [2024-07-14 20:23:05.480700] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.480762] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.480769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
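The property traffic traced above is the standard NVMe-oF controller-enable handshake: the initiator reads VS and CAP, clears CC.EN and waits for CSTS.RDY to drop, then writes CC.EN = 1 and waits for CSTS.RDY = 1 (the "pdu type = 1" and "pdu type = 5" lines are the NVMe/TCP ICResp and CapsuleResp PDUs carrying those exchanges back). A minimal sketch of that handshake, assuming hypothetical prop_get()/prop_set() helpers in place of the fabrics Property Get/Property Set commands (this is not SPDK source, just an outline of the sequence the log shows):

```c
/* Illustrative sketch of the CC.EN / CSTS.RDY handshake traced above.
 * Register offsets follow the NVMe spec (CC = 0x14, CSTS = 0x1c);
 * prop_get()/prop_set() are hypothetical stand-ins for the fabrics
 * Property Get / Property Set commands seen in the log. */
#include <stdint.h>

#define NVME_REG_CC   0x14
#define NVME_REG_CSTS 0x1c
#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

extern uint32_t prop_get(uint32_t ofst);             /* hypothetical helper */
extern void     prop_set(uint32_t ofst, uint32_t v); /* hypothetical helper */

void enable_controller(void)
{
    /* "disable and wait for CSTS.RDY = 0" */
    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) & ~NVME_CC_EN);
    while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)
        ;

    /* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1" */
    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
    while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY))
        ;
    /* controller is ready -> IDENTIFY, configure AER, keep-alive, ... */
}
```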
00:23:16.528 [2024-07-14 20:23:05.480773] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480777] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.480784] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:16.528 [2024-07-14 20:23:05.480794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480799] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480803] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.480810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.528 [2024-07-14 20:23:05.480830] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.480910] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.480919] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.528 [2024-07-14 20:23:05.480923] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480927] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.480933] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:16.528 [2024-07-14 20:23:05.480938] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:16.528 [2024-07-14 20:23:05.480946] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:16.528 [2024-07-14 20:23:05.480957] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:16.528 [2024-07-14 20:23:05.480967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.480972] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.528 [2024-07-14 20:23:05.480980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.528 [2024-07-14 20:23:05.481003] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.528 [2024-07-14 20:23:05.481106] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.528 [2024-07-14 20:23:05.481113] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.528 [2024-07-14 20:23:05.481117] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.481121] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205a580): datao=0, datal=4096, cccid=0 00:23:16.528 [2024-07-14 20:23:05.481126] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a66c0) on tqpair(0x205a580): expected_datao=0, 
payload_size=4096 00:23:16.528 [2024-07-14 20:23:05.481131] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.481139] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.481144] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.481152] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.528 [2024-07-14 20:23:05.481158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.528 [2024-07-14 20:23:05.481162] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.528 [2024-07-14 20:23:05.481166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.528 [2024-07-14 20:23:05.481175] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:16.528 [2024-07-14 20:23:05.481180] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:16.528 [2024-07-14 20:23:05.481185] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:16.528 [2024-07-14 20:23:05.481190] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:16.528 [2024-07-14 20:23:05.481195] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:16.529 [2024-07-14 20:23:05.481201] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:16.529 [2024-07-14 20:23:05.481214] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:16.529 [2024-07-14 20:23:05.481223] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481231] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:16.529 [2024-07-14 20:23:05.481261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.529 [2024-07-14 20:23:05.481337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.529 [2024-07-14 20:23:05.481344] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.529 [2024-07-14 20:23:05.481347] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481351] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a66c0) on tqpair=0x205a580 00:23:16.529 [2024-07-14 20:23:05.481361] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481369] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.529 [2024-07-14 20:23:05.481382] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481386] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481390] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.529 [2024-07-14 20:23:05.481402] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.529 [2024-07-14 20:23:05.481422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481430] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.529 [2024-07-14 20:23:05.481441] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:16.529 [2024-07-14 20:23:05.481450] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:16.529 [2024-07-14 20:23:05.481457] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.529 [2024-07-14 20:23:05.481494] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a66c0, cid 0, qid 0 00:23:16.529 [2024-07-14 20:23:05.481502] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6820, cid 1, qid 0 00:23:16.529 [2024-07-14 20:23:05.481507] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6980, cid 2, qid 0 00:23:16.529 [2024-07-14 20:23:05.481512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.529 [2024-07-14 20:23:05.481517] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6c40, cid 4, qid 0 00:23:16.529 [2024-07-14 20:23:05.481603] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.529 [2024-07-14 20:23:05.481610] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.529 [2024-07-14 20:23:05.481613] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x20a6c40) on tqpair=0x205a580 00:23:16.529 [2024-07-14 20:23:05.481624] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:16.529 [2024-07-14 20:23:05.481629] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:16.529 [2024-07-14 20:23:05.481640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.529 [2024-07-14 20:23:05.481672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6c40, cid 4, qid 0 00:23:16.529 [2024-07-14 20:23:05.481740] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.529 [2024-07-14 20:23:05.481761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.529 [2024-07-14 20:23:05.481765] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481769] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205a580): datao=0, datal=4096, cccid=4 00:23:16.529 [2024-07-14 20:23:05.481774] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a6c40) on tqpair(0x205a580): expected_datao=0, payload_size=4096 00:23:16.529 [2024-07-14 20:23:05.481779] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481786] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481791] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.529 [2024-07-14 20:23:05.481805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.529 [2024-07-14 20:23:05.481809] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481813] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6c40) on tqpair=0x205a580 00:23:16.529 [2024-07-14 20:23:05.481827] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:16.529 [2024-07-14 20:23:05.481898] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481909] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.529 [2024-07-14 20:23:05.481926] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481930] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.481934] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.481940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
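The GET LOG PAGE (02) commands in this stretch fetch the discovery log page (LID 0x70), and the transfer sizes follow directly from CDW10: the low byte is the log page ID and bits 31:16 hold the zero-based dword count. The illustrative decoder below (not part of SPDK) reproduces the 1024-, 3072-, and 8-byte transfers reported in the c2h_data "datal" lines that follow; the final 8-byte read is presumably just re-fetching the generation counter.

```c
/* Illustrative decode of Get Log Page CDW10 as printed in the log:
 *   0x00ff0070 -> LID 0x70 (discovery), 256 dwords = 1024 bytes
 *   0x02ff0070 -> LID 0x70,             768 dwords = 3072 bytes
 *   0x00010070 -> LID 0x70,               2 dwords =    8 bytes
 * matching the c2h_data datal values reported by the transport. */
#include <stdint.h>
#include <stdio.h>

static void decode_get_log_page_cdw10(uint32_t cdw10)
{
    uint8_t  lid   = cdw10 & 0xff;               /* log page identifier      */
    uint32_t numdl = (cdw10 >> 16) & 0xffff;     /* 0's-based dword count    */
    printf("LID 0x%02x, %u bytes\n", lid, (numdl + 1) * 4);
}

int main(void)
{
    decode_get_log_page_cdw10(0x00ff0070);
    decode_get_log_page_cdw10(0x02ff0070);
    decode_get_log_page_cdw10(0x00010070);
    return 0;
}
```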
00:23:16.529 [2024-07-14 20:23:05.481974] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6c40, cid 4, qid 0 00:23:16.529 [2024-07-14 20:23:05.481983] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6da0, cid 5, qid 0 00:23:16.529 [2024-07-14 20:23:05.482093] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.529 [2024-07-14 20:23:05.482110] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.529 [2024-07-14 20:23:05.482115] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.482119] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205a580): datao=0, datal=1024, cccid=4 00:23:16.529 [2024-07-14 20:23:05.482124] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a6c40) on tqpair(0x205a580): expected_datao=0, payload_size=1024 00:23:16.529 [2024-07-14 20:23:05.482129] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.482136] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.482140] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.482146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.529 [2024-07-14 20:23:05.482152] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.529 [2024-07-14 20:23:05.482156] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.482160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6da0) on tqpair=0x205a580 00:23:16.529 [2024-07-14 20:23:05.527887] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.529 [2024-07-14 20:23:05.527914] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.529 [2024-07-14 20:23:05.527936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.527940] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6c40) on tqpair=0x205a580 00:23:16.529 [2024-07-14 20:23:05.527964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.527970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.527979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.529 [2024-07-14 20:23:05.528016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6c40, cid 4, qid 0 00:23:16.529 [2024-07-14 20:23:05.528105] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.529 [2024-07-14 20:23:05.528112] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.529 [2024-07-14 20:23:05.528115] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.528119] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205a580): datao=0, datal=3072, cccid=4 00:23:16.529 [2024-07-14 20:23:05.528123] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a6c40) on tqpair(0x205a580): expected_datao=0, payload_size=3072 00:23:16.529 [2024-07-14 20:23:05.528128] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.528135] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.528155] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.528164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.529 [2024-07-14 20:23:05.528186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.529 [2024-07-14 20:23:05.528189] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.528193] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6c40) on tqpair=0x205a580 00:23:16.529 [2024-07-14 20:23:05.528205] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.529 [2024-07-14 20:23:05.528210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205a580) 00:23:16.529 [2024-07-14 20:23:05.528217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.530 [2024-07-14 20:23:05.528258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6c40, cid 4, qid 0 00:23:16.530 [2024-07-14 20:23:05.528343] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.530 [2024-07-14 20:23:05.528350] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.530 [2024-07-14 20:23:05.528353] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.530 [2024-07-14 20:23:05.528357] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205a580): datao=0, datal=8, cccid=4 00:23:16.530 [2024-07-14 20:23:05.528362] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20a6c40) on tqpair(0x205a580): expected_datao=0, payload_size=8 00:23:16.530 [2024-07-14 20:23:05.528367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.530 [2024-07-14 20:23:05.528373] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.530 [2024-07-14 20:23:05.528377] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.530 [2024-07-14 20:23:05.569983] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.530 [2024-07-14 20:23:05.570011] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.530 [2024-07-14 20:23:05.570033] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.530 [2024-07-14 20:23:05.570037] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6c40) on tqpair=0x205a580 00:23:16.530 ===================================================== 00:23:16.530 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:16.530 ===================================================== 00:23:16.530 Controller Capabilities/Features 00:23:16.530 ================================ 00:23:16.530 Vendor ID: 0000 00:23:16.530 Subsystem Vendor ID: 0000 00:23:16.530 Serial Number: .................... 00:23:16.530 Model Number: ........................................ 
00:23:16.530 Firmware Version: 24.05.1 00:23:16.530 Recommended Arb Burst: 0 00:23:16.530 IEEE OUI Identifier: 00 00 00 00:23:16.530 Multi-path I/O 00:23:16.530 May have multiple subsystem ports: No 00:23:16.530 May have multiple controllers: No 00:23:16.530 Associated with SR-IOV VF: No 00:23:16.530 Max Data Transfer Size: 131072 00:23:16.530 Max Number of Namespaces: 0 00:23:16.530 Max Number of I/O Queues: 1024 00:23:16.530 NVMe Specification Version (VS): 1.3 00:23:16.530 NVMe Specification Version (Identify): 1.3 00:23:16.530 Maximum Queue Entries: 128 00:23:16.530 Contiguous Queues Required: Yes 00:23:16.530 Arbitration Mechanisms Supported 00:23:16.530 Weighted Round Robin: Not Supported 00:23:16.530 Vendor Specific: Not Supported 00:23:16.530 Reset Timeout: 15000 ms 00:23:16.530 Doorbell Stride: 4 bytes 00:23:16.530 NVM Subsystem Reset: Not Supported 00:23:16.530 Command Sets Supported 00:23:16.530 NVM Command Set: Supported 00:23:16.530 Boot Partition: Not Supported 00:23:16.530 Memory Page Size Minimum: 4096 bytes 00:23:16.530 Memory Page Size Maximum: 4096 bytes 00:23:16.530 Persistent Memory Region: Not Supported 00:23:16.530 Optional Asynchronous Events Supported 00:23:16.530 Namespace Attribute Notices: Not Supported 00:23:16.530 Firmware Activation Notices: Not Supported 00:23:16.530 ANA Change Notices: Not Supported 00:23:16.530 PLE Aggregate Log Change Notices: Not Supported 00:23:16.530 LBA Status Info Alert Notices: Not Supported 00:23:16.530 EGE Aggregate Log Change Notices: Not Supported 00:23:16.530 Normal NVM Subsystem Shutdown event: Not Supported 00:23:16.530 Zone Descriptor Change Notices: Not Supported 00:23:16.530 Discovery Log Change Notices: Supported 00:23:16.530 Controller Attributes 00:23:16.530 128-bit Host Identifier: Not Supported 00:23:16.530 Non-Operational Permissive Mode: Not Supported 00:23:16.530 NVM Sets: Not Supported 00:23:16.530 Read Recovery Levels: Not Supported 00:23:16.530 Endurance Groups: Not Supported 00:23:16.530 Predictable Latency Mode: Not Supported 00:23:16.530 Traffic Based Keep ALive: Not Supported 00:23:16.530 Namespace Granularity: Not Supported 00:23:16.530 SQ Associations: Not Supported 00:23:16.530 UUID List: Not Supported 00:23:16.530 Multi-Domain Subsystem: Not Supported 00:23:16.530 Fixed Capacity Management: Not Supported 00:23:16.530 Variable Capacity Management: Not Supported 00:23:16.530 Delete Endurance Group: Not Supported 00:23:16.530 Delete NVM Set: Not Supported 00:23:16.530 Extended LBA Formats Supported: Not Supported 00:23:16.530 Flexible Data Placement Supported: Not Supported 00:23:16.530 00:23:16.530 Controller Memory Buffer Support 00:23:16.530 ================================ 00:23:16.530 Supported: No 00:23:16.530 00:23:16.530 Persistent Memory Region Support 00:23:16.530 ================================ 00:23:16.530 Supported: No 00:23:16.530 00:23:16.530 Admin Command Set Attributes 00:23:16.530 ============================ 00:23:16.530 Security Send/Receive: Not Supported 00:23:16.530 Format NVM: Not Supported 00:23:16.530 Firmware Activate/Download: Not Supported 00:23:16.530 Namespace Management: Not Supported 00:23:16.530 Device Self-Test: Not Supported 00:23:16.530 Directives: Not Supported 00:23:16.530 NVMe-MI: Not Supported 00:23:16.530 Virtualization Management: Not Supported 00:23:16.530 Doorbell Buffer Config: Not Supported 00:23:16.530 Get LBA Status Capability: Not Supported 00:23:16.530 Command & Feature Lockdown Capability: Not Supported 00:23:16.530 Abort Command Limit: 1 00:23:16.530 
Async Event Request Limit: 4 00:23:16.530 Number of Firmware Slots: N/A 00:23:16.530 Firmware Slot 1 Read-Only: N/A 00:23:16.530 Firmware Activation Without Reset: N/A 00:23:16.530 Multiple Update Detection Support: N/A 00:23:16.530 Firmware Update Granularity: No Information Provided 00:23:16.530 Per-Namespace SMART Log: No 00:23:16.530 Asymmetric Namespace Access Log Page: Not Supported 00:23:16.530 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:16.530 Command Effects Log Page: Not Supported 00:23:16.530 Get Log Page Extended Data: Supported 00:23:16.530 Telemetry Log Pages: Not Supported 00:23:16.530 Persistent Event Log Pages: Not Supported 00:23:16.530 Supported Log Pages Log Page: May Support 00:23:16.530 Commands Supported & Effects Log Page: Not Supported 00:23:16.530 Feature Identifiers & Effects Log Page:May Support 00:23:16.530 NVMe-MI Commands & Effects Log Page: May Support 00:23:16.530 Data Area 4 for Telemetry Log: Not Supported 00:23:16.530 Error Log Page Entries Supported: 128 00:23:16.530 Keep Alive: Not Supported 00:23:16.530 00:23:16.530 NVM Command Set Attributes 00:23:16.530 ========================== 00:23:16.530 Submission Queue Entry Size 00:23:16.530 Max: 1 00:23:16.530 Min: 1 00:23:16.530 Completion Queue Entry Size 00:23:16.530 Max: 1 00:23:16.530 Min: 1 00:23:16.530 Number of Namespaces: 0 00:23:16.530 Compare Command: Not Supported 00:23:16.530 Write Uncorrectable Command: Not Supported 00:23:16.530 Dataset Management Command: Not Supported 00:23:16.530 Write Zeroes Command: Not Supported 00:23:16.530 Set Features Save Field: Not Supported 00:23:16.530 Reservations: Not Supported 00:23:16.530 Timestamp: Not Supported 00:23:16.530 Copy: Not Supported 00:23:16.530 Volatile Write Cache: Not Present 00:23:16.530 Atomic Write Unit (Normal): 1 00:23:16.530 Atomic Write Unit (PFail): 1 00:23:16.530 Atomic Compare & Write Unit: 1 00:23:16.530 Fused Compare & Write: Supported 00:23:16.530 Scatter-Gather List 00:23:16.530 SGL Command Set: Supported 00:23:16.530 SGL Keyed: Supported 00:23:16.530 SGL Bit Bucket Descriptor: Not Supported 00:23:16.530 SGL Metadata Pointer: Not Supported 00:23:16.530 Oversized SGL: Not Supported 00:23:16.530 SGL Metadata Address: Not Supported 00:23:16.530 SGL Offset: Supported 00:23:16.530 Transport SGL Data Block: Not Supported 00:23:16.530 Replay Protected Memory Block: Not Supported 00:23:16.530 00:23:16.530 Firmware Slot Information 00:23:16.530 ========================= 00:23:16.530 Active slot: 0 00:23:16.530 00:23:16.530 00:23:16.530 Error Log 00:23:16.530 ========= 00:23:16.530 00:23:16.530 Active Namespaces 00:23:16.530 ================= 00:23:16.530 Discovery Log Page 00:23:16.530 ================== 00:23:16.530 Generation Counter: 2 00:23:16.530 Number of Records: 2 00:23:16.530 Record Format: 0 00:23:16.530 00:23:16.530 Discovery Log Entry 0 00:23:16.530 ---------------------- 00:23:16.530 Transport Type: 3 (TCP) 00:23:16.530 Address Family: 1 (IPv4) 00:23:16.530 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:16.530 Entry Flags: 00:23:16.530 Duplicate Returned Information: 1 00:23:16.530 Explicit Persistent Connection Support for Discovery: 1 00:23:16.530 Transport Requirements: 00:23:16.530 Secure Channel: Not Required 00:23:16.530 Port ID: 0 (0x0000) 00:23:16.530 Controller ID: 65535 (0xffff) 00:23:16.530 Admin Max SQ Size: 128 00:23:16.530 Transport Service Identifier: 4420 00:23:16.530 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:16.530 Transport Address: 10.0.0.2 00:23:16.530 
Discovery Log Entry 1 00:23:16.530 ---------------------- 00:23:16.530 Transport Type: 3 (TCP) 00:23:16.530 Address Family: 1 (IPv4) 00:23:16.531 Subsystem Type: 2 (NVM Subsystem) 00:23:16.531 Entry Flags: 00:23:16.531 Duplicate Returned Information: 0 00:23:16.531 Explicit Persistent Connection Support for Discovery: 0 00:23:16.531 Transport Requirements: 00:23:16.531 Secure Channel: Not Required 00:23:16.531 Port ID: 0 (0x0000) 00:23:16.531 Controller ID: 65535 (0xffff) 00:23:16.531 Admin Max SQ Size: 128 00:23:16.531 Transport Service Identifier: 4420 00:23:16.531 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:16.531 Transport Address: 10.0.0.2 [2024-07-14 20:23:05.570214] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:16.531 [2024-07-14 20:23:05.570236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.531 [2024-07-14 20:23:05.570245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.531 [2024-07-14 20:23:05.570251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.531 [2024-07-14 20:23:05.570257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.531 [2024-07-14 20:23:05.570272] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570277] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570281] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.570290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.570335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.570433] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.570440] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.570444] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570448] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.570458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570462] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570466] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.570474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.570499] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.570583] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.570589] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.570593] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570597] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.570608] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:16.531 [2024-07-14 20:23:05.570614] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:16.531 [2024-07-14 20:23:05.570625] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570629] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570633] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.570641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.570662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.570731] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.570737] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.570741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570745] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.570757] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570762] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570765] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.570773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.570792] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.570844] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.570850] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.570854] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570858] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.570934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570941] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.570945] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.570953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.570977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.571034] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 
20:23:05.571041] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.571045] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571049] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.571061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571066] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.571077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.571097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.571154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.571161] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.571165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571169] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.571181] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571186] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571190] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.571197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.571217] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.571294] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.571301] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.571305] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571309] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.571320] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571325] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571329] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.571336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.571355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.571404] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.571411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.571415] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
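After the discovery log is dumped, the initiator tears the controller down ("Prepare to destruct SSD", RTD3E = 0 us, shutdown timeout = 10000 ms), and the run of FABRIC PROPERTY SET/GET lines here corresponds to the shutdown handshake being polled to completion ("shutdown complete in 1 milliseconds" appears a little further down). A self-contained sketch of that handshake, again using hypothetical prop_get()/prop_set() helpers in place of the fabrics property commands:

```c
/* Illustrative sketch of the shutdown handshake behind the property polling
 * above: write CC.SHN = 01b (normal shutdown), then poll CSTS.SHST until it
 * reads 10b (shutdown complete).  Bit positions follow the NVMe spec;
 * prop_get()/prop_set() are hypothetical helpers, as in the earlier sketch. */
#include <stdint.h>

#define NVME_REG_CC          0x14
#define NVME_REG_CSTS        0x1c
#define NVME_CC_SHN_SHIFT    14      /* CC.SHN,    bits 15:14 */
#define NVME_CC_SHN_NORMAL   0x1u
#define NVME_CSTS_SHST_SHIFT 2       /* CSTS.SHST, bits 3:2   */
#define NVME_CSTS_SHST_DONE  0x2u

extern uint32_t prop_get(uint32_t ofst);             /* hypothetical */
extern void     prop_set(uint32_t ofst, uint32_t v); /* hypothetical */

void shutdown_controller(void)
{
    uint32_t cc = prop_get(NVME_REG_CC);
    cc &= ~(0x3u << NVME_CC_SHN_SHIFT);
    cc |= (uint32_t)NVME_CC_SHN_NORMAL << NVME_CC_SHN_SHIFT;
    prop_set(NVME_REG_CC, cc);

    /* the log bounds this poll with the 10000 ms shutdown timeout */
    while (((prop_get(NVME_REG_CSTS) >> NVME_CSTS_SHST_SHIFT) & 0x3u)
           != NVME_CSTS_SHST_DONE)
        ;
}
```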
00:23:16.531 [2024-07-14 20:23:05.571419] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.571430] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571434] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.571446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.571465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.571518] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.571525] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.571528] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571532] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.571544] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571548] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571552] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.571560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.571579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.571635] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.531 [2024-07-14 20:23:05.571641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.531 [2024-07-14 20:23:05.571645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571649] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.531 [2024-07-14 20:23:05.571660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.531 [2024-07-14 20:23:05.571669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.531 [2024-07-14 20:23:05.571676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.531 [2024-07-14 20:23:05.571695] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.531 [2024-07-14 20:23:05.571748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.532 [2024-07-14 20:23:05.571754] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.532 [2024-07-14 20:23:05.571758] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.532 [2024-07-14 20:23:05.571762] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.532 [2024-07-14 20:23:05.571774] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.532 [2024-07-14 20:23:05.571778] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.532 [2024-07-14 20:23:05.571782] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.532 [2024-07-14 20:23:05.571790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.532 [2024-07-14 20:23:05.571809] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.532 [2024-07-14 20:23:05.571881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.532 [2024-07-14 20:23:05.572351] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.532 [2024-07-14 20:23:05.572360] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.532 [2024-07-14 20:23:05.572365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.532 [2024-07-14 20:23:05.572382] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.532 [2024-07-14 20:23:05.572388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.532 [2024-07-14 20:23:05.572392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205a580) 00:23:16.532 [2024-07-14 20:23:05.572400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.532 [2024-07-14 20:23:05.572431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20a6ae0, cid 3, qid 0 00:23:16.532 [2024-07-14 20:23:05.572503] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.532 [2024-07-14 20:23:05.572510] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.532 [2024-07-14 20:23:05.572514] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.532 [2024-07-14 20:23:05.572518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20a6ae0) on tqpair=0x205a580 00:23:16.532 [2024-07-14 20:23:05.572528] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 1 milliseconds 00:23:16.532 00:23:16.532 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:16.795 [2024-07-14 20:23:05.616446] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
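This second spdk_nvme_identify invocation repeats the whole sequence against the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem. For reference, a minimal sketch of the same connect-and-identify flow through the SPDK public API; the real tool parses many more options and does proper error handling, so treat this only as an outline under those assumptions:

```c
/* Minimal sketch (not the spdk_nvme_identify source): connect to the target
 * named on the command line above and print a few identify-controller fields.
 * Error handling trimmed for brevity. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) < 0)
        return 1;

    struct spdk_nvme_transport_id trid;
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
        return 1;

    /* drives the same connect/enable/identify state machine traced above */
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL)
        return 1;

    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    /* SN/MN/FR are fixed-width, not NUL-terminated, hence the precisions */
    printf("SN: %.20s  MN: %.40s  FR: %.8s\n", cdata->sn, cdata->mn, cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}
```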
00:23:16.795 [2024-07-14 20:23:05.616489] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104651 ] 00:23:16.795 [2024-07-14 20:23:05.753974] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:16.795 [2024-07-14 20:23:05.754057] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:16.795 [2024-07-14 20:23:05.754064] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:16.795 [2024-07-14 20:23:05.754080] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:16.795 [2024-07-14 20:23:05.754091] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:16.795 [2024-07-14 20:23:05.754242] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:16.795 [2024-07-14 20:23:05.754292] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1587580 0 00:23:16.795 [2024-07-14 20:23:05.758918] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:16.795 [2024-07-14 20:23:05.758944] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:16.795 [2024-07-14 20:23:05.758966] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:16.795 [2024-07-14 20:23:05.758970] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:16.795 [2024-07-14 20:23:05.759018] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.759026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.759030] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.795 [2024-07-14 20:23:05.759046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:16.795 [2024-07-14 20:23:05.759080] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.795 [2024-07-14 20:23:05.766889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.795 [2024-07-14 20:23:05.766917] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.795 [2024-07-14 20:23:05.766938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.766943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.795 [2024-07-14 20:23:05.766955] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:16.795 [2024-07-14 20:23:05.766964] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:16.795 [2024-07-14 20:23:05.766970] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:16.795 [2024-07-14 20:23:05.766988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.766994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.766998] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.795 [2024-07-14 20:23:05.767007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.795 [2024-07-14 20:23:05.767037] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.795 [2024-07-14 20:23:05.767115] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.795 [2024-07-14 20:23:05.767122] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.795 [2024-07-14 20:23:05.767125] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767129] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.795 [2024-07-14 20:23:05.767152] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:16.795 [2024-07-14 20:23:05.767176] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:16.795 [2024-07-14 20:23:05.767184] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767188] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767192] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.795 [2024-07-14 20:23:05.767200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.795 [2024-07-14 20:23:05.767221] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.795 [2024-07-14 20:23:05.767298] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.795 [2024-07-14 20:23:05.767305] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.795 [2024-07-14 20:23:05.767309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767313] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.795 [2024-07-14 20:23:05.767320] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:16.795 [2024-07-14 20:23:05.767329] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:16.795 [2024-07-14 20:23:05.767336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767340] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.795 [2024-07-14 20:23:05.767352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.795 [2024-07-14 20:23:05.767373] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.795 [2024-07-14 20:23:05.767426] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.795 [2024-07-14 20:23:05.767433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.795 [2024-07-14 
20:23:05.767437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.795 [2024-07-14 20:23:05.767448] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:16.795 [2024-07-14 20:23:05.767458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767467] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.795 [2024-07-14 20:23:05.767474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.795 [2024-07-14 20:23:05.767494] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.795 [2024-07-14 20:23:05.767553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.795 [2024-07-14 20:23:05.767560] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.795 [2024-07-14 20:23:05.767564] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767568] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.795 [2024-07-14 20:23:05.767574] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:16.795 [2024-07-14 20:23:05.767580] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:16.795 [2024-07-14 20:23:05.767588] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:16.795 [2024-07-14 20:23:05.767694] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:16.795 [2024-07-14 20:23:05.767698] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:16.795 [2024-07-14 20:23:05.767708] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.795 [2024-07-14 20:23:05.767724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.795 [2024-07-14 20:23:05.767745] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.795 [2024-07-14 20:23:05.767803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.795 [2024-07-14 20:23:05.767809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.795 [2024-07-14 20:23:05.767813] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767817] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.795 
[2024-07-14 20:23:05.767824] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:16.795 [2024-07-14 20:23:05.767834] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767839] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767843] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.795 [2024-07-14 20:23:05.767850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.795 [2024-07-14 20:23:05.767870] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.795 [2024-07-14 20:23:05.767941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.795 [2024-07-14 20:23:05.767950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.795 [2024-07-14 20:23:05.767954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.795 [2024-07-14 20:23:05.767958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.796 [2024-07-14 20:23:05.767964] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:16.796 [2024-07-14 20:23:05.767970] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.767978] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:16.796 [2024-07-14 20:23:05.767989] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.767999] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.796 [2024-07-14 20:23:05.768036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.796 [2024-07-14 20:23:05.768139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.796 [2024-07-14 20:23:05.768146] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.796 [2024-07-14 20:23:05.768150] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768154] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=4096, cccid=0 00:23:16.796 [2024-07-14 20:23:05.768160] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d36c0) on tqpair(0x1587580): expected_datao=0, payload_size=4096 00:23:16.796 [2024-07-14 20:23:05.768164] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768173] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768178] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.796 [2024-07-14 20:23:05.768193] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.796 [2024-07-14 20:23:05.768197] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768201] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.796 [2024-07-14 20:23:05.768210] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:16.796 [2024-07-14 20:23:05.768216] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:16.796 [2024-07-14 20:23:05.768221] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:16.796 [2024-07-14 20:23:05.768226] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:16.796 [2024-07-14 20:23:05.768231] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:16.796 [2024-07-14 20:23:05.768236] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768250] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768259] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768264] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768268] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:16.796 [2024-07-14 20:23:05.768298] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.796 [2024-07-14 20:23:05.768360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.796 [2024-07-14 20:23:05.768367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.796 [2024-07-14 20:23:05.768371] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768375] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d36c0) on tqpair=0x1587580 00:23:16.796 [2024-07-14 20:23:05.768384] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.796 [2024-07-14 20:23:05.768406] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.796 [2024-07-14 20:23:05.768426] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768430] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768434] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.796 [2024-07-14 20:23:05.768446] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768453] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.796 [2024-07-14 20:23:05.768464] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768473] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768480] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768484] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.796 [2024-07-14 20:23:05.768519] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d36c0, cid 0, qid 0 00:23:16.796 [2024-07-14 20:23:05.768526] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3820, cid 1, qid 0 00:23:16.796 [2024-07-14 20:23:05.768531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3980, cid 2, qid 0 00:23:16.796 [2024-07-14 20:23:05.768536] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.796 [2024-07-14 20:23:05.768541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3c40, cid 4, qid 0 00:23:16.796 [2024-07-14 20:23:05.768630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.796 [2024-07-14 20:23:05.768637] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.796 [2024-07-14 20:23:05.768640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768644] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3c40) on tqpair=0x1587580 00:23:16.796 [2024-07-14 20:23:05.768651] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:16.796 [2024-07-14 20:23:05.768657] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768666] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768672] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768679] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:16.796 [2024-07-14 20:23:05.768715] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3c40, cid 4, qid 0 00:23:16.796 [2024-07-14 20:23:05.768778] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.796 [2024-07-14 20:23:05.768784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.796 [2024-07-14 20:23:05.768788] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768792] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3c40) on tqpair=0x1587580 00:23:16.796 [2024-07-14 20:23:05.768871] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768886] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.768895] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.768900] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.768907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.796 [2024-07-14 20:23:05.768931] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3c40, cid 4, qid 0 00:23:16.796 [2024-07-14 20:23:05.769001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.796 [2024-07-14 20:23:05.769008] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.796 [2024-07-14 20:23:05.769012] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.769015] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=4096, cccid=4 00:23:16.796 [2024-07-14 20:23:05.769020] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d3c40) on tqpair(0x1587580): expected_datao=0, payload_size=4096 00:23:16.796 [2024-07-14 20:23:05.769025] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.769032] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.769036] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.769045] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.796 
[2024-07-14 20:23:05.769051] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.796 [2024-07-14 20:23:05.769055] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.769059] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3c40) on tqpair=0x1587580 00:23:16.796 [2024-07-14 20:23:05.769075] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:16.796 [2024-07-14 20:23:05.769087] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.769098] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:16.796 [2024-07-14 20:23:05.769106] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.796 [2024-07-14 20:23:05.769111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1587580) 00:23:16.796 [2024-07-14 20:23:05.769118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.796 [2024-07-14 20:23:05.769140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3c40, cid 4, qid 0 00:23:16.797 [2024-07-14 20:23:05.769214] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.797 [2024-07-14 20:23:05.769221] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.797 [2024-07-14 20:23:05.769225] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769229] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=4096, cccid=4 00:23:16.797 [2024-07-14 20:23:05.769233] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d3c40) on tqpair(0x1587580): expected_datao=0, payload_size=4096 00:23:16.797 [2024-07-14 20:23:05.769238] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769245] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769249] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769258] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.797 [2024-07-14 20:23:05.769264] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.797 [2024-07-14 20:23:05.769267] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3c40) on tqpair=0x1587580 00:23:16.797 [2024-07-14 20:23:05.769285] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769295] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769304] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769308] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.769315] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.769337] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3c40, cid 4, qid 0 00:23:16.797 [2024-07-14 20:23:05.769406] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.797 [2024-07-14 20:23:05.769412] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.797 [2024-07-14 20:23:05.769416] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769420] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=4096, cccid=4 00:23:16.797 [2024-07-14 20:23:05.769425] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d3c40) on tqpair(0x1587580): expected_datao=0, payload_size=4096 00:23:16.797 [2024-07-14 20:23:05.769429] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769436] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769440] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.797 [2024-07-14 20:23:05.769455] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.797 [2024-07-14 20:23:05.769458] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3c40) on tqpair=0x1587580 00:23:16.797 [2024-07-14 20:23:05.769472] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769481] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769492] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769499] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769505] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769510] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:16.797 [2024-07-14 20:23:05.769515] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:16.797 [2024-07-14 20:23:05.769520] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:16.797 [2024-07-14 20:23:05.769542] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769547] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.769555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:16.797 [2024-07-14 20:23:05.769562] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769567] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769570] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.769577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.797 [2024-07-14 20:23:05.769604] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3c40, cid 4, qid 0 00:23:16.797 [2024-07-14 20:23:05.769611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3da0, cid 5, qid 0 00:23:16.797 [2024-07-14 20:23:05.769682] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.797 [2024-07-14 20:23:05.769689] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.797 [2024-07-14 20:23:05.769692] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769696] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3c40) on tqpair=0x1587580 00:23:16.797 [2024-07-14 20:23:05.769705] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.797 [2024-07-14 20:23:05.769711] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.797 [2024-07-14 20:23:05.769714] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769718] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3da0) on tqpair=0x1587580 00:23:16.797 [2024-07-14 20:23:05.769730] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769734] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.769741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.769762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3da0, cid 5, qid 0 00:23:16.797 [2024-07-14 20:23:05.769825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.797 [2024-07-14 20:23:05.769832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.797 [2024-07-14 20:23:05.769836] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769840] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3da0) on tqpair=0x1587580 00:23:16.797 [2024-07-14 20:23:05.769852] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769870] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.769878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.769901] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3da0, cid 5, qid 0 00:23:16.797 [2024-07-14 20:23:05.769961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.797 [2024-07-14 20:23:05.769968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.797 
[2024-07-14 20:23:05.769972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3da0) on tqpair=0x1587580 00:23:16.797 [2024-07-14 20:23:05.769988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.769992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.770000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.770020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3da0, cid 5, qid 0 00:23:16.797 [2024-07-14 20:23:05.770078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.797 [2024-07-14 20:23:05.770085] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.797 [2024-07-14 20:23:05.770088] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770092] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3da0) on tqpair=0x1587580 00:23:16.797 [2024-07-14 20:23:05.770107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.770119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.770127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.770137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.770144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770149] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.770155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.770163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770167] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1587580) 00:23:16.797 [2024-07-14 20:23:05.770173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.797 [2024-07-14 20:23:05.770195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3da0, cid 5, qid 0 00:23:16.797 [2024-07-14 20:23:05.770203] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3c40, cid 4, qid 0 00:23:16.797 [2024-07-14 20:23:05.770208] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3f00, cid 6, qid 0 00:23:16.797 [2024-07-14 20:23:05.770212] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d4060, cid 7, qid 0 00:23:16.797 [2024-07-14 20:23:05.770349] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.797 [2024-07-14 20:23:05.770356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.797 [2024-07-14 20:23:05.770360] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770364] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=8192, cccid=5 00:23:16.797 [2024-07-14 20:23:05.770368] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d3da0) on tqpair(0x1587580): expected_datao=0, payload_size=8192 00:23:16.797 [2024-07-14 20:23:05.770373] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770390] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770395] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.797 [2024-07-14 20:23:05.770407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.797 [2024-07-14 20:23:05.770410] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.797 [2024-07-14 20:23:05.770414] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=512, cccid=4 00:23:16.798 [2024-07-14 20:23:05.770419] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d3c40) on tqpair(0x1587580): expected_datao=0, payload_size=512 00:23:16.798 [2024-07-14 20:23:05.770423] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770429] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770433] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770439] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.798 [2024-07-14 20:23:05.770444] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.798 [2024-07-14 20:23:05.770448] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770452] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=512, cccid=6 00:23:16.798 [2024-07-14 20:23:05.770456] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d3f00) on tqpair(0x1587580): expected_datao=0, payload_size=512 00:23:16.798 [2024-07-14 20:23:05.770461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770467] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770471] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:16.798 [2024-07-14 20:23:05.770482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:16.798 [2024-07-14 20:23:05.770486] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770489] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1587580): datao=0, datal=4096, cccid=7 00:23:16.798 [2024-07-14 20:23:05.770494] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15d4060) on tqpair(0x1587580): expected_datao=0, payload_size=4096 00:23:16.798 [2024-07-14 20:23:05.770498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770504] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770508] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.798 [2024-07-14 20:23:05.770523] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.798 [2024-07-14 20:23:05.770526] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3da0) on tqpair=0x1587580 00:23:16.798 [2024-07-14 20:23:05.770549] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.798 ===================================================== 00:23:16.798 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.798 ===================================================== 00:23:16.798 Controller Capabilities/Features 00:23:16.798 ================================ 00:23:16.798 Vendor ID: 8086 00:23:16.798 Subsystem Vendor ID: 8086 00:23:16.798 Serial Number: SPDK00000000000001 00:23:16.798 Model Number: SPDK bdev Controller 00:23:16.798 Firmware Version: 24.05.1 00:23:16.798 Recommended Arb Burst: 6 00:23:16.798 IEEE OUI Identifier: e4 d2 5c 00:23:16.798 Multi-path I/O 00:23:16.798 May have multiple subsystem ports: Yes 00:23:16.798 May have multiple controllers: Yes 00:23:16.798 Associated with SR-IOV VF: No 00:23:16.798 Max Data Transfer Size: 131072 00:23:16.798 Max Number of Namespaces: 32 00:23:16.798 Max Number of I/O Queues: 127 00:23:16.798 NVMe Specification Version (VS): 1.3 00:23:16.798 NVMe Specification Version (Identify): 1.3 00:23:16.798 Maximum Queue Entries: 128 00:23:16.798 Contiguous Queues Required: Yes 00:23:16.798 Arbitration Mechanisms Supported 00:23:16.798 Weighted Round Robin: Not Supported 00:23:16.798 Vendor Specific: Not Supported 00:23:16.798 Reset Timeout: 15000 ms 00:23:16.798 Doorbell Stride: 4 bytes 00:23:16.798 NVM Subsystem Reset: Not Supported 00:23:16.798 Command Sets Supported 00:23:16.798 NVM Command Set: Supported 00:23:16.798 Boot Partition: Not Supported 00:23:16.798 Memory Page Size Minimum: 4096 bytes 00:23:16.798 Memory Page Size Maximum: 4096 bytes 00:23:16.798 Persistent Memory Region: Not Supported 00:23:16.798 Optional Asynchronous Events Supported 00:23:16.798 Namespace Attribute Notices: Supported 00:23:16.798 Firmware Activation Notices: Not Supported 00:23:16.798 ANA Change Notices: Not Supported 00:23:16.798 PLE Aggregate Log Change Notices: Not Supported 00:23:16.798 LBA Status Info Alert Notices: Not Supported 00:23:16.798 EGE Aggregate Log Change Notices: Not Supported 00:23:16.798 Normal NVM Subsystem Shutdown event: Not Supported 00:23:16.798 Zone Descriptor Change Notices: Not Supported 00:23:16.798 Discovery Log Change Notices: Not Supported 00:23:16.798 Controller Attributes 00:23:16.798 128-bit Host Identifier: Supported 00:23:16.798 Non-Operational Permissive Mode: Not Supported 00:23:16.798 NVM Sets: Not Supported 00:23:16.798 Read Recovery Levels: Not Supported 00:23:16.798 Endurance Groups: Not Supported 00:23:16.798 Predictable Latency Mode: Not Supported 00:23:16.798 Traffic 
Based Keep ALive: Not Supported 00:23:16.798 Namespace Granularity: Not Supported 00:23:16.798 SQ Associations: Not Supported 00:23:16.798 UUID List: Not Supported 00:23:16.798 Multi-Domain Subsystem: Not Supported 00:23:16.798 Fixed Capacity Management: Not Supported 00:23:16.798 Variable Capacity Management: Not Supported 00:23:16.798 Delete Endurance Group: Not Supported 00:23:16.798 Delete NVM Set: Not Supported 00:23:16.798 Extended LBA Formats Supported: Not Supported 00:23:16.798 Flexible Data Placement Supported: Not Supported 00:23:16.798 00:23:16.798 Controller Memory Buffer Support 00:23:16.798 ================================ 00:23:16.798 Supported: No 00:23:16.798 00:23:16.798 Persistent Memory Region Support 00:23:16.798 ================================ 00:23:16.798 Supported: No 00:23:16.798 00:23:16.798 Admin Command Set Attributes 00:23:16.798 ============================ 00:23:16.798 Security Send/Receive: Not Supported 00:23:16.798 Format NVM: Not Supported 00:23:16.798 Firmware Activate/Download: Not Supported 00:23:16.798 Namespace Management: Not Supported 00:23:16.798 Device Self-Test: Not Supported 00:23:16.798 Directives: Not Supported 00:23:16.798 NVMe-MI: Not Supported 00:23:16.798 Virtualization Management: Not Supported 00:23:16.798 Doorbell Buffer Config: Not Supported 00:23:16.798 Get LBA Status Capability: Not Supported 00:23:16.798 Command & Feature Lockdown Capability: Not Supported 00:23:16.798 Abort Command Limit: 4 00:23:16.798 Async Event Request Limit: 4 00:23:16.798 Number of Firmware Slots: N/A 00:23:16.798 Firmware Slot 1 Read-Only: N/A 00:23:16.798 Firmware Activation Without Reset: [2024-07-14 20:23:05.770556] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.798 [2024-07-14 20:23:05.770560] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770564] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3c40) on tqpair=0x1587580 00:23:16.798 [2024-07-14 20:23:05.770575] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.798 [2024-07-14 20:23:05.770581] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.798 [2024-07-14 20:23:05.770585] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770589] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3f00) on tqpair=0x1587580 00:23:16.798 [2024-07-14 20:23:05.770600] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.798 [2024-07-14 20:23:05.770606] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.798 [2024-07-14 20:23:05.770609] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.798 [2024-07-14 20:23:05.770613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d4060) on tqpair=0x1587580 00:23:16.798 N/A 00:23:16.798 Multiple Update Detection Support: N/A 00:23:16.798 Firmware Update Granularity: No Information Provided 00:23:16.798 Per-Namespace SMART Log: No 00:23:16.798 Asymmetric Namespace Access Log Page: Not Supported 00:23:16.798 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:16.798 Command Effects Log Page: Supported 00:23:16.798 Get Log Page Extended Data: Supported 00:23:16.798 Telemetry Log Pages: Not Supported 00:23:16.798 Persistent Event Log Pages: Not Supported 00:23:16.798 Supported Log Pages Log Page: May Support 00:23:16.798 Commands Supported & Effects Log Page: Not Supported 
00:23:16.798 Feature Identifiers & Effects Log Page:May Support 00:23:16.798 NVMe-MI Commands & Effects Log Page: May Support 00:23:16.798 Data Area 4 for Telemetry Log: Not Supported 00:23:16.798 Error Log Page Entries Supported: 128 00:23:16.798 Keep Alive: Supported 00:23:16.798 Keep Alive Granularity: 10000 ms 00:23:16.798 00:23:16.798 NVM Command Set Attributes 00:23:16.798 ========================== 00:23:16.798 Submission Queue Entry Size 00:23:16.798 Max: 64 00:23:16.798 Min: 64 00:23:16.798 Completion Queue Entry Size 00:23:16.798 Max: 16 00:23:16.798 Min: 16 00:23:16.798 Number of Namespaces: 32 00:23:16.798 Compare Command: Supported 00:23:16.798 Write Uncorrectable Command: Not Supported 00:23:16.798 Dataset Management Command: Supported 00:23:16.798 Write Zeroes Command: Supported 00:23:16.798 Set Features Save Field: Not Supported 00:23:16.798 Reservations: Supported 00:23:16.798 Timestamp: Not Supported 00:23:16.798 Copy: Supported 00:23:16.798 Volatile Write Cache: Present 00:23:16.798 Atomic Write Unit (Normal): 1 00:23:16.798 Atomic Write Unit (PFail): 1 00:23:16.798 Atomic Compare & Write Unit: 1 00:23:16.798 Fused Compare & Write: Supported 00:23:16.798 Scatter-Gather List 00:23:16.798 SGL Command Set: Supported 00:23:16.798 SGL Keyed: Supported 00:23:16.798 SGL Bit Bucket Descriptor: Not Supported 00:23:16.798 SGL Metadata Pointer: Not Supported 00:23:16.798 Oversized SGL: Not Supported 00:23:16.798 SGL Metadata Address: Not Supported 00:23:16.798 SGL Offset: Supported 00:23:16.798 Transport SGL Data Block: Not Supported 00:23:16.798 Replay Protected Memory Block: Not Supported 00:23:16.798 00:23:16.798 Firmware Slot Information 00:23:16.798 ========================= 00:23:16.799 Active slot: 1 00:23:16.799 Slot 1 Firmware Revision: 24.05.1 00:23:16.799 00:23:16.799 00:23:16.799 Commands Supported and Effects 00:23:16.799 ============================== 00:23:16.799 Admin Commands 00:23:16.799 -------------- 00:23:16.799 Get Log Page (02h): Supported 00:23:16.799 Identify (06h): Supported 00:23:16.799 Abort (08h): Supported 00:23:16.799 Set Features (09h): Supported 00:23:16.799 Get Features (0Ah): Supported 00:23:16.799 Asynchronous Event Request (0Ch): Supported 00:23:16.799 Keep Alive (18h): Supported 00:23:16.799 I/O Commands 00:23:16.799 ------------ 00:23:16.799 Flush (00h): Supported LBA-Change 00:23:16.799 Write (01h): Supported LBA-Change 00:23:16.799 Read (02h): Supported 00:23:16.799 Compare (05h): Supported 00:23:16.799 Write Zeroes (08h): Supported LBA-Change 00:23:16.799 Dataset Management (09h): Supported LBA-Change 00:23:16.799 Copy (19h): Supported LBA-Change 00:23:16.799 Unknown (79h): Supported LBA-Change 00:23:16.799 Unknown (7Ah): Supported 00:23:16.799 00:23:16.799 Error Log 00:23:16.799 ========= 00:23:16.799 00:23:16.799 Arbitration 00:23:16.799 =========== 00:23:16.799 Arbitration Burst: 1 00:23:16.799 00:23:16.799 Power Management 00:23:16.799 ================ 00:23:16.799 Number of Power States: 1 00:23:16.799 Current Power State: Power State #0 00:23:16.799 Power State #0: 00:23:16.799 Max Power: 0.00 W 00:23:16.799 Non-Operational State: Operational 00:23:16.799 Entry Latency: Not Reported 00:23:16.799 Exit Latency: Not Reported 00:23:16.799 Relative Read Throughput: 0 00:23:16.799 Relative Read Latency: 0 00:23:16.799 Relative Write Throughput: 0 00:23:16.799 Relative Write Latency: 0 00:23:16.799 Idle Power: Not Reported 00:23:16.799 Active Power: Not Reported 00:23:16.799 Non-Operational Permissive Mode: Not Supported 00:23:16.799 
00:23:16.799 Health Information 00:23:16.799 ================== 00:23:16.799 Critical Warnings: 00:23:16.799 Available Spare Space: OK 00:23:16.799 Temperature: OK 00:23:16.799 Device Reliability: OK 00:23:16.799 Read Only: No 00:23:16.799 Volatile Memory Backup: OK 00:23:16.799 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:16.799 Temperature Threshold: [2024-07-14 20:23:05.770728] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.770736] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1587580) 00:23:16.799 [2024-07-14 20:23:05.770744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.799 [2024-07-14 20:23:05.770770] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d4060, cid 7, qid 0 00:23:16.799 [2024-07-14 20:23:05.770836] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.799 [2024-07-14 20:23:05.770843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.799 [2024-07-14 20:23:05.770847] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.770851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d4060) on tqpair=0x1587580 00:23:16.799 [2024-07-14 20:23:05.774992] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:16.799 [2024-07-14 20:23:05.775016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.799 [2024-07-14 20:23:05.775024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.799 [2024-07-14 20:23:05.775031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.799 [2024-07-14 20:23:05.775037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.799 [2024-07-14 20:23:05.775054] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775059] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.799 [2024-07-14 20:23:05.775072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.799 [2024-07-14 20:23:05.775105] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.799 [2024-07-14 20:23:05.775198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.799 [2024-07-14 20:23:05.775205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.799 [2024-07-14 20:23:05.775209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.799 [2024-07-14 20:23:05.775222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775226] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:16.799 [2024-07-14 20:23:05.775230] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.799 [2024-07-14 20:23:05.775253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.799 [2024-07-14 20:23:05.775278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.799 [2024-07-14 20:23:05.775390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.799 [2024-07-14 20:23:05.775396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.799 [2024-07-14 20:23:05.775400] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775404] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.799 [2024-07-14 20:23:05.775410] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:16.799 [2024-07-14 20:23:05.775414] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:16.799 [2024-07-14 20:23:05.775425] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775429] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775433] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.799 [2024-07-14 20:23:05.775440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.799 [2024-07-14 20:23:05.775460] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.799 [2024-07-14 20:23:05.775517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.799 [2024-07-14 20:23:05.775524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.799 [2024-07-14 20:23:05.775527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775531] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.799 [2024-07-14 20:23:05.775543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775548] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.799 [2024-07-14 20:23:05.775559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.799 [2024-07-14 20:23:05.775578] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.799 [2024-07-14 20:23:05.775630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.799 [2024-07-14 20:23:05.775636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.799 [2024-07-14 20:23:05.775640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775644] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.799 [2024-07-14 20:23:05.775655] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:16.799 [2024-07-14 20:23:05.775659] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775663] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.799 [2024-07-14 20:23:05.775670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.799 [2024-07-14 20:23:05.775691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.799 [2024-07-14 20:23:05.775744] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.799 [2024-07-14 20:23:05.775750] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.799 [2024-07-14 20:23:05.775754] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.799 [2024-07-14 20:23:05.775757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.799 [2024-07-14 20:23:05.775768] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.775773] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.775776] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.800 [2024-07-14 20:23:05.775784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.800 [2024-07-14 20:23:05.775803] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.800 [2024-07-14 20:23:05.775864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.800 [2024-07-14 20:23:05.775870] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.800 [2024-07-14 20:23:05.775873] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.775877] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.800 [2024-07-14 20:23:05.775902] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.775909] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.775912] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.800 [2024-07-14 20:23:05.775920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.800 [2024-07-14 20:23:05.775953] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.800 [2024-07-14 20:23:05.776015] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.800 [2024-07-14 20:23:05.776022] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.800 [2024-07-14 20:23:05.776025] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776029] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.800 [2024-07-14 20:23:05.776040] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776045] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776049] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.800 [2024-07-14 20:23:05.776056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.800 [2024-07-14 20:23:05.776076] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.800 [2024-07-14 20:23:05.776129] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.800 [2024-07-14 20:23:05.776135] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.800 [2024-07-14 20:23:05.776139] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776143] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.800 [2024-07-14 20:23:05.776154] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776162] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.800 [2024-07-14 20:23:05.776169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.800 [2024-07-14 20:23:05.776188] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.800 [2024-07-14 20:23:05.776248] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.800 [2024-07-14 20:23:05.776257] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.800 [2024-07-14 20:23:05.776260] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776264] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.800 [2024-07-14 20:23:05.776276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776280] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776284] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.800 [2024-07-14 20:23:05.776291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.800 [2024-07-14 20:23:05.776323] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.800 [2024-07-14 20:23:05.776380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.800 [2024-07-14 20:23:05.776387] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.800 [2024-07-14 20:23:05.776390] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776394] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.800 [2024-07-14 20:23:05.776405] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776409] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776413] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.800 [2024-07-14 20:23:05.776420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.800 [2024-07-14 20:23:05.776440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.800 [2024-07-14 20:23:05.776495] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.800 [2024-07-14 20:23:05.776502] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.800 [2024-07-14 20:23:05.776506] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776510] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.800 [2024-07-14 20:23:05.776521] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.800 [2024-07-14 20:23:05.776530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.800 [2024-07-14 20:23:05.776537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.801 [2024-07-14 20:23:05.778691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.801 [2024-07-14 20:23:05.778746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.801 [2024-07-14 20:23:05.778752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.801 [2024-07-14 20:23:05.778756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.801 [2024-07-14 20:23:05.778759] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.801 [2024-07-14 20:23:05.778770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.801 [2024-07-14 20:23:05.778775] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.802 [2024-07-14 20:23:05.778778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.802 [2024-07-14 20:23:05.778785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.802 [2024-07-14 20:23:05.778805] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.802 [2024-07-14 20:23:05.781934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.802 [2024-07-14 20:23:05.781958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.802 [2024-07-14 20:23:05.781962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.802 [2024-07-14 20:23:05.781967] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.802 [2024-07-14 20:23:05.781981] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:16.802 [2024-07-14 20:23:05.781987] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:16.802 [2024-07-14 20:23:05.781991] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1587580) 00:23:16.802 [2024-07-14 20:23:05.781999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.802 [2024-07-14 20:23:05.782026] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15d3ae0, cid 3, qid 0 00:23:16.802 [2024-07-14 20:23:05.782106] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:16.802 [2024-07-14 20:23:05.782113] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:16.802 [2024-07-14 20:23:05.782117] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:16.802 [2024-07-14 20:23:05.782121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15d3ae0) on tqpair=0x1587580 00:23:16.802 [2024-07-14 20:23:05.782130] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:23:16.802 0 Kelvin (-273 Celsius) 00:23:16.802 Available Spare: 0% 00:23:16.802 Available Spare Threshold: 0% 00:23:16.802 Life Percentage Used: 0% 00:23:16.802 Data Units Read: 0 00:23:16.802 Data Units Written: 0 00:23:16.802 Host Read Commands: 0 00:23:16.802 Host Write Commands: 0 00:23:16.802 Controller Busy Time: 0 minutes 00:23:16.802 Power Cycles: 0 00:23:16.802 Power On Hours: 0 hours 00:23:16.802 Unsafe Shutdowns: 0 00:23:16.802 Unrecoverable Media Errors: 0 
00:23:16.802 Lifetime Error Log Entries: 0 00:23:16.802 Warning Temperature Time: 0 minutes 00:23:16.802 Critical Temperature Time: 0 minutes 00:23:16.802 00:23:16.802 Number of Queues 00:23:16.802 ================ 00:23:16.802 Number of I/O Submission Queues: 127 00:23:16.802 Number of I/O Completion Queues: 127 00:23:16.802 00:23:16.802 Active Namespaces 00:23:16.802 ================= 00:23:16.802 Namespace ID:1 00:23:16.802 Error Recovery Timeout: Unlimited 00:23:16.802 Command Set Identifier: NVM (00h) 00:23:16.802 Deallocate: Supported 00:23:16.802 Deallocated/Unwritten Error: Not Supported 00:23:16.802 Deallocated Read Value: Unknown 00:23:16.802 Deallocate in Write Zeroes: Not Supported 00:23:16.802 Deallocated Guard Field: 0xFFFF 00:23:16.802 Flush: Supported 00:23:16.802 Reservation: Supported 00:23:16.802 Namespace Sharing Capabilities: Multiple Controllers 00:23:16.802 Size (in LBAs): 131072 (0GiB) 00:23:16.802 Capacity (in LBAs): 131072 (0GiB) 00:23:16.802 Utilization (in LBAs): 131072 (0GiB) 00:23:16.802 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:16.802 EUI64: ABCDEF0123456789 00:23:16.802 UUID: 4290adc5-7a7a-4c05-996b-e07c052d19b1 00:23:16.802 Thin Provisioning: Not Supported 00:23:16.802 Per-NS Atomic Units: Yes 00:23:16.802 Atomic Boundary Size (Normal): 0 00:23:16.802 Atomic Boundary Size (PFail): 0 00:23:16.802 Atomic Boundary Offset: 0 00:23:16.802 Maximum Single Source Range Length: 65535 00:23:16.802 Maximum Copy Length: 65535 00:23:16.802 Maximum Source Range Count: 1 00:23:16.802 NGUID/EUI64 Never Reused: No 00:23:16.802 Namespace Write Protected: No 00:23:16.802 Number of LBA Formats: 1 00:23:16.802 Current LBA Format: LBA Format #00 00:23:16.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:16.802 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:16.802 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.061 rmmod nvme_tcp 00:23:17.061 rmmod nvme_fabrics 00:23:17.061 rmmod nvme_keyring 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 104592 ']' 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 104592 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@946 -- # '[' -z 104592 ']' 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 104592 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104592 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:17.061 killing process with pid 104592 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104592' 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 104592 00:23:17.061 20:23:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 104592 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:17.320 00:23:17.320 real 0m2.719s 00:23:17.320 user 0m7.516s 00:23:17.320 sys 0m0.703s 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:17.320 20:23:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:17.320 ************************************ 00:23:17.320 END TEST nvmf_identify 00:23:17.320 ************************************ 00:23:17.320 20:23:06 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:17.320 20:23:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:17.320 20:23:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:17.320 20:23:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:17.320 ************************************ 00:23:17.320 START TEST nvmf_perf 00:23:17.320 ************************************ 00:23:17.320 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:17.581 * Looking for test storage... 
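The nvmf_identify teardown traced above (nvmftestfini plus killprocess) reduces to a short sequence. A minimal sketch reconstructed from the xtrace output, assuming it is run from the same shell that launched nvmf_tgt and that $nvmfpid holds the pid the harness reported (104592 in this run); the real autotest_common.sh helpers do more checking than this:

  # remove the test subsystem, stop the target, then unload the initiator-side transport modules
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess(): signal the nvmf_tgt reactor and wait for it to exit
  modprobe -v -r nvme-tcp              # -r also drops the now-unused dependencies (the rmmod lines echoed above)
  modprobe -v -r nvme-fabrics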
00:23:17.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:17.581 Cannot find device "nvmf_tgt_br" 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:17.581 Cannot find device "nvmf_tgt_br2" 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:17.581 Cannot find device "nvmf_tgt_br" 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:23:17.581 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:17.581 Cannot find device "nvmf_tgt_br2" 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:17.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:17.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:17.582 
20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:17.582 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:17.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:23:17.873 00:23:17.873 --- 10.0.0.2 ping statistics --- 00:23:17.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.873 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:17.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:17.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:23:17.873 00:23:17.873 --- 10.0.0.3 ping statistics --- 00:23:17.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.873 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:17.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:17.873 00:23:17.873 --- 10.0.0.1 ping statistics --- 00:23:17.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.873 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=104817 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 104817 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 104817 ']' 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:17.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:17.873 20:23:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.873 [2024-07-14 20:23:06.927834] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:17.873 [2024-07-14 20:23:06.928023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.157 [2024-07-14 20:23:07.078346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.157 [2024-07-14 20:23:07.179420] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.157 [2024-07-14 20:23:07.179475] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
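The nvmf_veth_init sequence traced above builds a small bridged veth topology between the default namespace (initiator, 10.0.0.1) and the nvmf_tgt_ns_spdk namespace (target, 10.0.0.2 and 10.0.0.3). A condensed sketch reconstructed from those commands, with the second target interface and some of the link-up steps folded together; interface names and addresses are the ones used in this log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the default netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the two veth peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target reachability check

The target itself is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.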
00:23:18.157 [2024-07-14 20:23:07.179485] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.157 [2024-07-14 20:23:07.179501] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.157 [2024-07-14 20:23:07.179507] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.157 [2024-07-14 20:23:07.179650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.157 [2024-07-14 20:23:07.179834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.158 [2024-07-14 20:23:07.180795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.158 [2024-07-14 20:23:07.180860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:19.093 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:19.351 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:19.351 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:19.610 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:19.610 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:19.869 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:19.869 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:19.869 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:19.869 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:19.869 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:20.127 [2024-07-14 20:23:09.117464] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.127 20:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:20.387 20:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:20.387 20:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:20.645 20:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:20.645 20:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:20.904 20:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.162 [2024-07-14 20:23:10.055623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.162 20:23:10 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:21.422 20:23:10 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:21.422 20:23:10 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:21.422 20:23:10 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:21.422 20:23:10 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:22.359 Initializing NVMe Controllers 00:23:22.359 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:22.359 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:22.359 Initialization complete. Launching workers. 00:23:22.359 ======================================================== 00:23:22.359 Latency(us) 00:23:22.359 Device Information : IOPS MiB/s Average min max 00:23:22.359 PCIE (0000:00:10.0) NSID 1 from core 0: 21118.48 82.49 1515.65 337.43 9176.53 00:23:22.359 ======================================================== 00:23:22.359 Total : 21118.48 82.49 1515.65 337.43 9176.53 00:23:22.359 00:23:22.359 20:23:11 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:23.736 Initializing NVMe Controllers 00:23:23.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:23.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:23.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:23.736 Initialization complete. Launching workers. 00:23:23.736 ======================================================== 00:23:23.736 Latency(us) 00:23:23.736 Device Information : IOPS MiB/s Average min max 00:23:23.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3014.13 11.77 330.17 117.81 7287.26 00:23:23.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.88 0.48 8138.04 4992.62 12044.28 00:23:23.736 ======================================================== 00:23:23.736 Total : 3137.02 12.25 636.02 117.81 12044.28 00:23:23.736 00:23:23.736 20:23:12 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:25.115 Initializing NVMe Controllers 00:23:25.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:25.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:25.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:25.115 Initialization complete. Launching workers. 
00:23:25.115 ======================================================== 00:23:25.115 Latency(us) 00:23:25.115 Device Information : IOPS MiB/s Average min max 00:23:25.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8731.23 34.11 3676.43 831.44 9876.97 00:23:25.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2664.77 10.41 12099.71 5142.92 32290.18 00:23:25.115 ======================================================== 00:23:25.115 Total : 11396.00 44.52 5646.08 831.44 32290.18 00:23:25.115 00:23:25.115 20:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:25.115 20:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:27.647 Initializing NVMe Controllers 00:23:27.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.647 Controller IO queue size 128, less than required. 00:23:27.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:27.647 Controller IO queue size 128, less than required. 00:23:27.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:27.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:27.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:27.647 Initialization complete. Launching workers. 00:23:27.647 ======================================================== 00:23:27.647 Latency(us) 00:23:27.647 Device Information : IOPS MiB/s Average min max 00:23:27.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1279.57 319.89 102292.55 70726.65 178991.97 00:23:27.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 563.81 140.95 231217.22 98270.94 340523.45 00:23:27.647 ======================================================== 00:23:27.647 Total : 1843.39 460.85 141725.04 70726.65 340523.45 00:23:27.647 00:23:27.647 20:23:16 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:27.906 Initializing NVMe Controllers 00:23:27.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.906 Controller IO queue size 128, less than required. 00:23:27.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:27.906 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:27.906 Controller IO queue size 128, less than required. 00:23:27.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:27.906 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:23:27.906 WARNING: Some requested NVMe devices were skipped 00:23:27.906 No valid NVMe controllers or AIO or URING devices found 00:23:27.906 20:23:16 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:30.463 Initializing NVMe Controllers 00:23:30.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:30.463 Controller IO queue size 128, less than required. 00:23:30.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.463 Controller IO queue size 128, less than required. 00:23:30.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:30.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:30.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:30.463 Initialization complete. Launching workers. 00:23:30.463 00:23:30.463 ==================== 00:23:30.463 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:30.463 TCP transport: 00:23:30.463 polls: 6966 00:23:30.463 idle_polls: 4642 00:23:30.463 sock_completions: 2324 00:23:30.463 nvme_completions: 4583 00:23:30.463 submitted_requests: 6906 00:23:30.463 queued_requests: 1 00:23:30.464 00:23:30.464 ==================== 00:23:30.464 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:30.464 TCP transport: 00:23:30.464 polls: 7128 00:23:30.464 idle_polls: 4774 00:23:30.464 sock_completions: 2354 00:23:30.464 nvme_completions: 4817 00:23:30.464 submitted_requests: 7246 00:23:30.464 queued_requests: 1 00:23:30.464 ======================================================== 00:23:30.464 Latency(us) 00:23:30.464 Device Information : IOPS MiB/s Average min max 00:23:30.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1145.34 286.34 114570.08 58224.32 195502.84 00:23:30.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1203.84 300.96 107055.32 55799.99 161487.57 00:23:30.464 ======================================================== 00:23:30.464 Total : 2349.18 587.29 110719.14 55799.99 195502.84 00:23:30.464 00:23:30.464 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:30.464 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.721 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:30.721 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:23:30.721 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:30.980 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d77857d5-f307-43f5-9fcc-3e7839281303 00:23:30.980 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d77857d5-f307-43f5-9fcc-3e7839281303 00:23:30.980 20:23:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=d77857d5-f307-43f5-9fcc-3e7839281303 00:23:30.980 20:23:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:23:30.980 20:23:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 
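get_lvs_free_mb, whose trace starts here, derives the free space of the new lvstore from the bdev_lvol_get_lvstores JSON shown just below: free_mb = free_clusters * cluster_size / 1 MiB, i.e. 1278 * 4194304 / 1048576 = 5112 MiB for lvs_0 (and 1276 clusters, hence 5104 MiB, for the nested lvs_n_0 later). A minimal stand-alone sketch of the same derivation, assuming jq is available and the lvstore UUID is the one reported above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  uuid=d77857d5-f307-43f5-9fcc-3e7839281303
  lvs=$("$rpc" bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs")
  cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$lvs")
  echo $(( fc * cs / 1024 / 1024 ))   # 1278 * 4194304 / 1048576 = 5112 MiB, the size passed to bdev_lvol_create for lbd_0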
00:23:30.980 20:23:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:23:30.980 20:23:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:31.238 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:23:31.238 { 00:23:31.238 "base_bdev": "Nvme0n1", 00:23:31.238 "block_size": 4096, 00:23:31.238 "cluster_size": 4194304, 00:23:31.238 "free_clusters": 1278, 00:23:31.239 "name": "lvs_0", 00:23:31.239 "total_data_clusters": 1278, 00:23:31.239 "uuid": "d77857d5-f307-43f5-9fcc-3e7839281303" 00:23:31.239 } 00:23:31.239 ]' 00:23:31.239 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="d77857d5-f307-43f5-9fcc-3e7839281303") .free_clusters' 00:23:31.497 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:23:31.497 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d77857d5-f307-43f5-9fcc-3e7839281303") .cluster_size' 00:23:31.497 5112 00:23:31.497 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:23:31.497 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:23:31.497 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:23:31.497 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:23:31.497 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d77857d5-f307-43f5-9fcc-3e7839281303 lbd_0 5112 00:23:31.756 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9cfefd9a-4c65-407f-b8f0-7e38eaf0932e 00:23:31.756 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9cfefd9a-4c65-407f-b8f0-7e38eaf0932e lvs_n_0 00:23:32.014 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2547873d-519b-4cd0-af18-8a8d2b3de617 00:23:32.014 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2547873d-519b-4cd0-af18-8a8d2b3de617 00:23:32.014 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=2547873d-519b-4cd0-af18-8a8d2b3de617 00:23:32.014 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:23:32.014 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:23:32.014 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:23:32.014 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:23:32.272 { 00:23:32.272 "base_bdev": "Nvme0n1", 00:23:32.272 "block_size": 4096, 00:23:32.272 "cluster_size": 4194304, 00:23:32.272 "free_clusters": 0, 00:23:32.272 "name": "lvs_0", 00:23:32.272 "total_data_clusters": 1278, 00:23:32.272 "uuid": "d77857d5-f307-43f5-9fcc-3e7839281303" 00:23:32.272 }, 00:23:32.272 { 00:23:32.272 "base_bdev": "9cfefd9a-4c65-407f-b8f0-7e38eaf0932e", 00:23:32.272 "block_size": 4096, 00:23:32.272 "cluster_size": 4194304, 00:23:32.272 "free_clusters": 1276, 00:23:32.272 "name": "lvs_n_0", 00:23:32.272 "total_data_clusters": 1276, 00:23:32.272 "uuid": "2547873d-519b-4cd0-af18-8a8d2b3de617" 00:23:32.272 } 00:23:32.272 ]' 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | 
select(.uuid=="2547873d-519b-4cd0-af18-8a8d2b3de617") .free_clusters' 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2547873d-519b-4cd0-af18-8a8d2b3de617") .cluster_size' 00:23:32.272 5104 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:23:32.272 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2547873d-519b-4cd0-af18-8a8d2b3de617 lbd_nest_0 5104 00:23:32.531 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=acfe113d-b030-4840-b127-4e26512b8a69 00:23:32.531 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.789 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:32.789 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 acfe113d-b030-4840-b127-4e26512b8a69 00:23:33.048 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.307 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:33.307 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:33.307 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:33.307 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:33.307 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:33.565 Initializing NVMe Controllers 00:23:33.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.565 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:33.565 WARNING: Some requested NVMe devices were skipped 00:23:33.565 No valid NVMe controllers or AIO or URING devices found 00:23:33.824 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:33.824 20:23:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.044 Initializing NVMe Controllers 00:23:46.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.044 Initialization complete. Launching workers. 
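[Editor's note] get_lvs_free_mb, traced twice above, only multiplies free_clusters by cluster_size from bdev_lvol_get_lvstores and converts the result to MiB. Worked out with the values printed in the trace:

    # lvs_0:   1278 free clusters x 4 MiB per cluster -> 5112 MiB, size of lbd_0
    echo $(( 1278 * 4194304 / 1024 / 1024 ))   # 5112
    # lvs_n_0: 1276 free clusters x 4 MiB per cluster -> 5104 MiB, size of lbd_nest_0
    echo $(( 1276 * 4194304 / 1024 / 1024 ))   # 5104

The nested store has two clusters (8 MiB) fewer than lbd_0 itself, presumably consumed by the lvol-store metadata written onto lbd_0 when lvs_n_0 was created.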
00:23:46.044 ======================================================== 00:23:46.044 Latency(us) 00:23:46.044 Device Information : IOPS MiB/s Average min max 00:23:46.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 785.51 98.19 1272.23 409.53 8616.64 00:23:46.044 ======================================================== 00:23:46.044 Total : 785.51 98.19 1272.23 409.53 8616.64 00:23:46.044 00:23:46.044 20:23:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:46.044 20:23:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:46.044 20:23:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.044 Initializing NVMe Controllers 00:23:46.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.044 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:46.044 WARNING: Some requested NVMe devices were skipped 00:23:46.044 No valid NVMe controllers or AIO or URING devices found 00:23:46.044 20:23:33 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:46.044 20:23:33 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.025 Initializing NVMe Controllers 00:23:56.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:56.025 Initialization complete. Launching workers. 
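[Editor's note] The perf runs above and below all come from the same nested loop in host/perf.sh: every queue depth in qd_depth is paired with every I/O size in io_size against the single TCP listener. Condensed from the trace (arguments exactly as issued there; PERF and ADDR are just shorthand introduced here):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    ADDR='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        # 50/50 random read/write for 10 s at each (queue depth, I/O size) point
        "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$ADDR"
      done
    done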
00:23:56.025 ======================================================== 00:23:56.025 Latency(us) 00:23:56.025 Device Information : IOPS MiB/s Average min max 00:23:56.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1020.80 127.60 31388.81 7989.48 266271.36 00:23:56.025 ======================================================== 00:23:56.025 Total : 1020.80 127.60 31388.81 7989.48 266271.36 00:23:56.025 00:23:56.025 20:23:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:56.025 20:23:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:56.025 20:23:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.025 Initializing NVMe Controllers 00:23:56.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.025 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:56.025 WARNING: Some requested NVMe devices were skipped 00:23:56.025 No valid NVMe controllers or AIO or URING devices found 00:23:56.025 20:23:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:56.025 20:23:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.021 Initializing NVMe Controllers 00:24:06.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.021 Controller IO queue size 128, less than required. 00:24:06.021 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:06.021 Initialization complete. Launching workers. 
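[Editor's note] Both 512-byte passes above are skipped rather than run: the namespace behind cnode1 is lbd_nest_0 with 4096-byte blocks, so a 512-byte I/O cannot address a whole block and perf removes the namespace from the test, as the warnings say. Two quick cross-checks against the numbers in the log (values copied from the trace; the arithmetic is only illustrative):

    # 5104 MiB lbd_nest_0 == the 5351931904-byte namespace named in the warnings
    echo $(( 5104 * 1024 * 1024 ))                                            # 5351931904
    # MiB/s column is just IOPS x I/O size, e.g. the -q 32 -o 131072 row above
    awk 'BEGIN { printf "%.2f MiB/s\n", 1020.80 * 131072 / (1024 * 1024) }'   # 127.60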
00:24:06.021 ======================================================== 00:24:06.021 Latency(us) 00:24:06.021 Device Information : IOPS MiB/s Average min max 00:24:06.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3914.50 489.31 32759.65 14039.94 67888.16 00:24:06.021 ======================================================== 00:24:06.021 Total : 3914.50 489.31 32759.65 14039.94 67888.16 00:24:06.021 00:24:06.021 20:23:54 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.021 20:23:54 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete acfe113d-b030-4840-b127-4e26512b8a69 00:24:06.021 20:23:54 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:06.280 20:23:55 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9cfefd9a-4c65-407f-b8f0-7e38eaf0932e 00:24:06.280 20:23:55 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:06.539 20:23:55 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:06.539 20:23:55 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:06.539 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.539 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.799 rmmod nvme_tcp 00:24:06.799 rmmod nvme_fabrics 00:24:06.799 rmmod nvme_keyring 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 104817 ']' 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 104817 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 104817 ']' 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 104817 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104817 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:06.799 killing process with pid 104817 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104817' 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 104817 00:24:06.799 20:23:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 104817 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:08.177 00:24:08.177 real 0m50.884s 00:24:08.177 user 3m10.004s 00:24:08.177 sys 0m10.744s 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:08.177 20:23:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:08.177 ************************************ 00:24:08.177 END TEST nvmf_perf 00:24:08.177 ************************************ 00:24:08.436 20:23:57 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:08.436 20:23:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:08.436 20:23:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:08.436 20:23:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:08.436 ************************************ 00:24:08.436 START TEST nvmf_fio_host 00:24:08.436 ************************************ 00:24:08.436 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:08.436 * Looking for test storage... 
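[Editor's note] The real/user/sys block and the END/START banners above come from run_test in nvmf.sh, which wraps each stage script with timing and xtrace control. Rerunning just the stage that starts here, outside the harness, should be roughly the following (path and argument exactly as passed to run_test above; this assumes the SPDK build and the usual autotest environment already exist):

    cd /home/vagrant/spdk_repo/spdk
    ./test/nvmf/host/fio.sh --transport=tcp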
00:24:08.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:08.436 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.436 20:23:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.436 20:23:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.436 20:23:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.436 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.436 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
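[Editor's note] With NET_TYPE=virt and a TCP transport, prepare_net_devs falls through to nvmf_veth_init, whose trace follows. The commands below boil down to a small virtual topology: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target gets nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, and the bridge halves of the three veth pairs are joined by nvmf_br, with iptables admitting TCP port 4420. A condensed preview of that sequence (names and addresses exactly as defined in the trace; the ip link ... up steps are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks at the end of the trace confirm the root namespace can reach both target addresses through the bridge.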
00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:08.437 Cannot find device "nvmf_tgt_br" 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:08.437 Cannot find device "nvmf_tgt_br2" 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:08.437 Cannot find device "nvmf_tgt_br" 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:08.437 Cannot find device "nvmf_tgt_br2" 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:08.437 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:08.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:08.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:08.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:24:08.696 00:24:08.696 --- 10.0.0.2 ping statistics --- 00:24:08.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.696 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:08.696 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:08.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:08.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:08.696 00:24:08.696 --- 10.0.0.3 ping statistics --- 00:24:08.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.696 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:08.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:24:08.697 00:24:08.697 --- 10.0.0.1 ping statistics --- 00:24:08.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.697 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=105765 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 105765 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 105765 ']' 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:08.697 20:23:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.956 [2024-07-14 20:23:57.813733] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:08.956 [2024-07-14 20:23:57.813828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.956 [2024-07-14 20:23:57.953212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.956 [2024-07-14 20:23:58.040163] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.956 [2024-07-14 20:23:58.040239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
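[Editor's note] With the namespaces in place, fio.sh starts the target inside nvmf_tgt_ns_spdk and waits for its RPC socket before configuring anything. Stripped of the harness bookkeeping, the launch traced above amounts to roughly the following sketch (waitforlisten is approximated here by polling rpc_get_methods on the default /var/tmp/spdk.sock; the real helper does a bit more checking):

    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the app answers on its RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
    done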
00:24:08.956 [2024-07-14 20:23:58.040250] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.956 [2024-07-14 20:23:58.040258] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.956 [2024-07-14 20:23:58.040265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.215 [2024-07-14 20:23:58.040462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.215 [2024-07-14 20:23:58.040605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.215 [2024-07-14 20:23:58.041506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.215 [2024-07-14 20:23:58.041561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.782 20:23:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:09.782 20:23:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:24:09.782 20:23:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.039 [2024-07-14 20:23:59.040003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.039 20:23:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:10.039 20:23:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.039 20:23:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.296 20:23:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:10.555 Malloc1 00:24:10.555 20:23:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.814 20:23:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:11.073 20:23:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.332 [2024-07-14 20:24:00.170707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.332 20:24:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 
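[Editor's note] The host/fio.sh steps traced above build the entire target side over RPC before any I/O is issued. Pulled out of the trace into one place (commands and arguments exactly as issued there; the explanatory comments are editorial):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, flags from NVMF_TRANSPORT_OPTS
    $RPC bdev_malloc_create 64 512 -b Malloc1        # 64 MB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Only after the listener is up does the script hand the connection string to the fio plugin runs that follow.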
00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:11.592 20:24:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:11.592 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:11.592 fio-3.35 00:24:11.592 Starting 1 thread 00:24:14.127 00:24:14.127 test: (groupid=0, jobs=1): err= 0: pid=105891: Sun Jul 14 20:24:02 2024 00:24:14.127 read: IOPS=9297, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2006msec) 00:24:14.127 slat (nsec): min=1912, max=382450, avg=2412.87, stdev=4517.81 00:24:14.127 clat (usec): min=3211, max=12313, avg=7173.92, stdev=569.50 00:24:14.127 lat (usec): min=3267, max=12315, avg=7176.34, stdev=569.29 00:24:14.127 clat percentiles (usec): 00:24:14.127 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6718], 00:24:14.127 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:24:14.127 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8160], 00:24:14.127 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[10421], 99.95th=[11338], 00:24:14.127 | 99.99th=[12256] 00:24:14.127 bw ( KiB/s): min=35344, max=38352, per=99.94%, avg=37166.00, stdev=1408.76, samples=4 00:24:14.127 iops : min= 8836, max= 9588, avg=9291.50, stdev=352.19, samples=4 00:24:14.127 write: IOPS=9301, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2006msec); 0 zone resets 00:24:14.127 slat (usec): min=2, max=243, avg= 2.47, stdev= 2.41 00:24:14.127 clat (usec): min=2390, max=11722, avg=6529.22, stdev=525.30 00:24:14.127 lat (usec): min=2405, max=11725, avg=6531.69, stdev=525.18 00:24:14.127 clat percentiles (usec): 00:24:14.127 | 1.00th=[ 5538], 5.00th=[ 5800], 10.00th=[ 
5932], 20.00th=[ 6128], 00:24:14.127 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:24:14.127 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7439], 00:24:14.127 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[11338], 00:24:14.127 | 99.99th=[11731] 00:24:14.127 bw ( KiB/s): min=36000, max=38192, per=99.98%, avg=37198.00, stdev=970.26, samples=4 00:24:14.127 iops : min= 9000, max= 9548, avg=9299.50, stdev=242.57, samples=4 00:24:14.127 lat (msec) : 4=0.15%, 10=99.73%, 20=0.12% 00:24:14.127 cpu : usr=67.83%, sys=23.64%, ctx=7, majf=0, minf=6 00:24:14.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:14.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:14.127 issued rwts: total=18650,18659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:14.127 00:24:14.127 Run status group 0 (all jobs): 00:24:14.127 READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.4MB), run=2006-2006msec 00:24:14.127 WRITE: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.4MB), run=2006-2006msec 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:14.127 20:24:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:14.127 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:14.127 fio-3.35 00:24:14.127 Starting 1 thread 00:24:16.653 00:24:16.653 test: (groupid=0, jobs=1): err= 0: pid=105940: Sun Jul 14 20:24:05 2024 00:24:16.653 read: IOPS=8076, BW=126MiB/s (132MB/s)(253MiB/2007msec) 00:24:16.653 slat (usec): min=2, max=138, avg= 3.68, stdev= 2.65 00:24:16.653 clat (usec): min=2297, max=18140, avg=9340.53, stdev=2231.01 00:24:16.653 lat (usec): min=2301, max=18145, avg=9344.22, stdev=2231.09 00:24:16.653 clat percentiles (usec): 00:24:16.653 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7242], 00:24:16.653 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10159], 00:24:16.653 | 70.00th=[10683], 80.00th=[11076], 90.00th=[12125], 95.00th=[13042], 00:24:16.653 | 99.00th=[14746], 99.50th=[15270], 99.90th=[16188], 99.95th=[16319], 00:24:16.653 | 99.99th=[17433] 00:24:16.653 bw ( KiB/s): min=54624, max=74688, per=51.08%, avg=66000.00, stdev=8336.82, samples=4 00:24:16.653 iops : min= 3414, max= 4668, avg=4125.00, stdev=521.05, samples=4 00:24:16.653 write: IOPS=4877, BW=76.2MiB/s (79.9MB/s)(135MiB/1777msec); 0 zone resets 00:24:16.653 slat (usec): min=31, max=372, avg=36.75, stdev=10.96 00:24:16.653 clat (usec): min=6057, max=19869, avg=11497.84, stdev=2232.84 00:24:16.653 lat (usec): min=6108, max=19902, avg=11534.59, stdev=2233.86 00:24:16.653 clat percentiles (usec): 00:24:16.653 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9634], 00:24:16.653 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:24:16.653 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14615], 95.00th=[15664], 00:24:16.653 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530], 00:24:16.653 | 99.99th=[19792] 00:24:16.653 bw ( KiB/s): min=56416, max=78048, per=88.28%, avg=68896.00, stdev=9037.87, samples=4 00:24:16.653 iops : min= 3526, max= 4878, avg=4306.00, stdev=564.87, samples=4 00:24:16.653 lat (msec) : 4=0.16%, 10=47.54%, 20=52.30% 00:24:16.653 cpu : usr=70.64%, sys=19.29%, ctx=11, majf=0, minf=2 00:24:16.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:16.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:16.653 issued rwts: total=16209,8668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.653 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:16.653 00:24:16.653 Run status group 0 (all jobs): 00:24:16.653 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (266MB), run=2007-2007msec 00:24:16.653 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=135MiB (142MB), run=1777-1777msec 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:16.653 20:24:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:24:17.252 Nvme0n1 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=e71c17db-fc75-4086-b505-ea8f9a1c40cd 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb e71c17db-fc75-4086-b505-ea8f9a1c40cd 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=e71c17db-fc75-4086-b505-ea8f9a1c40cd 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:24:17.252 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:17.510 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:24:17.510 { 00:24:17.510 "base_bdev": "Nvme0n1", 00:24:17.510 "block_size": 4096, 00:24:17.510 "cluster_size": 1073741824, 00:24:17.510 "free_clusters": 4, 00:24:17.510 "name": "lvs_0", 00:24:17.510 "total_data_clusters": 4, 00:24:17.510 "uuid": "e71c17db-fc75-4086-b505-ea8f9a1c40cd" 00:24:17.510 } 00:24:17.510 ]' 00:24:17.511 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="e71c17db-fc75-4086-b505-ea8f9a1c40cd") .free_clusters' 00:24:17.511 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:24:17.511 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="e71c17db-fc75-4086-b505-ea8f9a1c40cd") .cluster_size' 00:24:17.769 4096 00:24:17.769 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:24:17.769 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:24:17.769 20:24:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:24:17.769 20:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:24:17.769 53437a58-1cd0-447f-9dc6-03d27c06b379 00:24:17.769 20:24:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:18.027 20:24:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:18.285 20:24:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:18.543 20:24:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
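[Editor's note] None of the fio jobs in this test open a local block device: fio_nvme preloads SPDK's fio plugin and passes the target coordinates through the filename string, which the plugin parses into an NVMe-oF connection. The invocation traced just above, reassembled into a directly runnable form (paths and arguments exactly as in the trace; example_config.fio ships with SPDK and selects ioengine=spdk, as the job banners in this log show, and --bs=4096 overrides the job's block size on the command line):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096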
00:24:18.801 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:18.801 fio-3.35 00:24:18.801 Starting 1 thread 00:24:21.329 00:24:21.329 test: (groupid=0, jobs=1): err= 0: pid=106092: Sun Jul 14 20:24:09 2024 00:24:21.329 read: IOPS=5992, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2008msec) 00:24:21.329 slat (nsec): min=1844, max=357212, avg=2890.31, stdev=5086.98 00:24:21.329 clat (usec): min=4501, max=19117, avg=11202.88, stdev=946.38 00:24:21.329 lat (usec): min=4511, max=19119, avg=11205.77, stdev=946.08 00:24:21.329 clat percentiles (usec): 00:24:21.329 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 00:24:21.329 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:24:21.329 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:24:21.329 | 99.00th=[13304], 99.50th=[13960], 99.90th=[17695], 99.95th=[17957], 00:24:21.329 | 99.99th=[19006] 00:24:21.329 bw ( KiB/s): min=23008, max=24448, per=99.77%, avg=23914.00, stdev=652.98, samples=4 00:24:21.329 iops : min= 5752, max= 6112, avg=5978.50, stdev=163.25, samples=4 00:24:21.329 write: IOPS=5974, BW=23.3MiB/s (24.5MB/s)(46.9MiB/2008msec); 0 zone resets 00:24:21.329 slat (nsec): min=1950, max=342822, avg=3078.87, stdev=4618.95 00:24:21.329 clat (usec): min=2746, max=17619, avg=10114.71, stdev=849.20 00:24:21.329 lat (usec): min=2760, max=17621, avg=10117.78, stdev=848.99 00:24:21.329 clat percentiles (usec): 00:24:21.329 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:24:21.329 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:24:21.329 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:24:21.329 | 99.00th=[11994], 99.50th=[12256], 99.90th=[15139], 99.95th=[16581], 00:24:21.329 | 99.99th=[17695] 00:24:21.329 bw ( KiB/s): min=23744, max=24064, per=99.97%, avg=23890.00, stdev=131.68, samples=4 00:24:21.329 iops : min= 5936, max= 6016, avg=5972.50, stdev=32.92, samples=4 00:24:21.329 lat (msec) : 4=0.04%, 10=26.26%, 20=73.70% 00:24:21.329 cpu : usr=70.20%, sys=22.22%, ctx=4, majf=0, minf=6 00:24:21.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:21.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:21.329 issued rwts: total=12033,11996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:21.329 00:24:21.329 Run status group 0 (all jobs): 00:24:21.329 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.3MB), run=2008-2008msec 00:24:21.329 WRITE: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.1MB), run=2008-2008msec 00:24:21.329 20:24:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:21.329 20:24:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:21.587 20:24:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=7a939881-f560-482e-9044-4230f67d6def 00:24:21.587 20:24:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 7a939881-f560-482e-9044-4230f67d6def 00:24:21.587 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local 
lvs_uuid=7a939881-f560-482e-9044-4230f67d6def 00:24:21.587 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:24:21.587 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:24:21.587 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:24:21.587 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:21.844 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:24:21.844 { 00:24:21.844 "base_bdev": "Nvme0n1", 00:24:21.844 "block_size": 4096, 00:24:21.844 "cluster_size": 1073741824, 00:24:21.844 "free_clusters": 0, 00:24:21.844 "name": "lvs_0", 00:24:21.844 "total_data_clusters": 4, 00:24:21.844 "uuid": "e71c17db-fc75-4086-b505-ea8f9a1c40cd" 00:24:21.844 }, 00:24:21.844 { 00:24:21.844 "base_bdev": "53437a58-1cd0-447f-9dc6-03d27c06b379", 00:24:21.844 "block_size": 4096, 00:24:21.844 "cluster_size": 4194304, 00:24:21.844 "free_clusters": 1022, 00:24:21.844 "name": "lvs_n_0", 00:24:21.844 "total_data_clusters": 1022, 00:24:21.844 "uuid": "7a939881-f560-482e-9044-4230f67d6def" 00:24:21.844 } 00:24:21.844 ]' 00:24:21.844 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="7a939881-f560-482e-9044-4230f67d6def") .free_clusters' 00:24:21.844 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:24:21.845 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="7a939881-f560-482e-9044-4230f67d6def") .cluster_size' 00:24:21.845 4088 00:24:21.845 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:24:21.845 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:24:21.845 20:24:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:24:21.845 20:24:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:24:22.103 4b6929f7-91b2-49f3-9111-fc51be3acea7 00:24:22.103 20:24:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:22.360 20:24:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:22.360 20:24:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local 
sanitizers 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:22.618 20:24:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:22.875 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:22.875 fio-3.35 00:24:22.875 Starting 1 thread 00:24:25.404 00:24:25.404 test: (groupid=0, jobs=1): err= 0: pid=106207: Sun Jul 14 20:24:14 2024 00:24:25.404 read: IOPS=4900, BW=19.1MiB/s (20.1MB/s)(38.5MiB/2013msec) 00:24:25.404 slat (nsec): min=1924, max=331103, avg=2832.27, stdev=4831.31 00:24:25.404 clat (usec): min=5193, max=30081, avg=13768.01, stdev=1635.17 00:24:25.404 lat (usec): min=5202, max=30083, avg=13770.85, stdev=1634.92 00:24:25.404 clat percentiles (usec): 00:24:25.404 | 1.00th=[11076], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:24:25.404 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:24:25.404 | 70.00th=[14222], 80.00th=[14746], 90.00th=[15533], 95.00th=[16188], 00:24:25.404 | 99.00th=[19530], 99.50th=[21103], 99.90th=[26346], 99.95th=[28443], 00:24:25.404 | 99.99th=[30016] 00:24:25.404 bw ( KiB/s): min=18680, max=20456, per=99.94%, avg=19590.25, stdev=794.13, samples=4 00:24:25.404 iops : min= 4670, max= 5114, avg=4897.50, stdev=198.57, samples=4 00:24:25.404 write: IOPS=4892, BW=19.1MiB/s (20.0MB/s)(38.5MiB/2013msec); 0 zone resets 00:24:25.404 slat (usec): min=2, max=218, avg= 2.95, stdev= 3.42 00:24:25.404 clat (usec): min=2342, max=28207, avg=12291.49, stdev=1518.99 00:24:25.404 lat (usec): min=2353, max=28210, avg=12294.44, stdev=1518.79 00:24:25.404 clat percentiles (usec): 00:24:25.404 | 1.00th=[ 9765], 
5.00th=[10552], 10.00th=[10814], 20.00th=[11338], 00:24:25.404 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:24:25.404 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13698], 95.00th=[14353], 00:24:25.404 | 99.00th=[17695], 99.50th=[19792], 99.90th=[26084], 99.95th=[27919], 00:24:25.404 | 99.99th=[28181] 00:24:25.404 bw ( KiB/s): min=18256, max=20127, per=99.96%, avg=19563.75, stdev=888.17, samples=4 00:24:25.404 iops : min= 4564, max= 5031, avg=4890.75, stdev=221.88, samples=4 00:24:25.404 lat (msec) : 4=0.04%, 10=0.99%, 20=98.43%, 50=0.54% 00:24:25.404 cpu : usr=72.37%, sys=21.47%, ctx=4, majf=0, minf=6 00:24:25.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:25.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:25.404 issued rwts: total=9865,9849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:25.404 00:24:25.404 Run status group 0 (all jobs): 00:24:25.404 READ: bw=19.1MiB/s (20.1MB/s), 19.1MiB/s-19.1MiB/s (20.1MB/s-20.1MB/s), io=38.5MiB (40.4MB), run=2013-2013msec 00:24:25.404 WRITE: bw=19.1MiB/s (20.0MB/s), 19.1MiB/s-19.1MiB/s (20.0MB/s-20.0MB/s), io=38.5MiB (40.3MB), run=2013-2013msec 00:24:25.404 20:24:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:25.404 20:24:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:24:25.404 20:24:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:25.661 20:24:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:25.919 20:24:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:26.176 20:24:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:26.434 20:24:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.692 rmmod nvme_tcp 00:24:26.692 rmmod nvme_fabrics 00:24:26.692 rmmod nvme_keyring 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 105765 ']' 
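For reference, the lvol-over-NVMe/TCP setup that host/fio.sh exercised above reduces to a short RPC sequence. The sketch below is condensed from the trace, not a standalone script; the repo path, the PCIe address 0000:00:10.0, the 10.0.0.2:4420 listener and the fio plugin paths are all specific to this CI VM, and rpc is just a local shorthand for scripts/rpc.py:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# attach the local NVMe device as bdev Nvme0n1
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
# lvstore with 1 GiB clusters on top of it, then a 4096 MiB lvol
$rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
$rpc bdev_lvol_create -l lvs_0 lbd_0 4096
# export the lvol through an NVMe/TCP subsystem
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# drive I/O against it with fio through the SPDK nvme plugin
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second fio pass in the trace repeats the same pattern one level down: lvs_n_0 is created on top of lbd_0 (bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0), lbd_nest_0 is carved from it, and it is exported as nqn.2016-06.io.spdk:cnode3.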
00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 105765 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 105765 ']' 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 105765 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105765 00:24:26.692 killing process with pid 105765 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105765' 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 105765 00:24:26.692 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 105765 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.951 20:24:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:26.951 00:24:26.951 real 0m18.720s 00:24:26.951 user 1m22.219s 00:24:26.951 sys 0m4.480s 00:24:26.952 20:24:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:26.952 20:24:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.952 ************************************ 00:24:26.952 END TEST nvmf_fio_host 00:24:26.952 ************************************ 00:24:27.211 20:24:16 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:27.211 20:24:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:27.211 20:24:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:27.211 20:24:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:27.211 ************************************ 00:24:27.211 START TEST nvmf_failover 00:24:27.211 ************************************ 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:27.211 * Looking for test storage... 
00:24:27.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:27.211 
20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.211 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:27.212 Cannot find device "nvmf_tgt_br" 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:27.212 Cannot find device "nvmf_tgt_br2" 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:27.212 Cannot find device "nvmf_tgt_br" 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:27.212 Cannot find device "nvmf_tgt_br2" 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:27.212 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:27.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:27.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:27.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:27.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:24:27.471 00:24:27.471 --- 10.0.0.2 ping statistics --- 00:24:27.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.471 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:27.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:27.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:24:27.471 00:24:27.471 --- 10.0.0.3 ping statistics --- 00:24:27.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.471 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:27.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:27.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:24:27.471 00:24:27.471 --- 10.0.0.1 ping statistics --- 00:24:27.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.471 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=106481 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 106481 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 106481 ']' 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:27.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
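The "Cannot find device" messages and the ping checks above come from nvmf_veth_init, which rebuilds the test network from scratch before every run. Condensed from the trace (interface and namespace names are the ones common.sh uses on this CI VM, and the individual "ip link set ... up" steps are omitted), the topology is a bridge joining three veth pairs, with the target side living in its own network namespace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target,    10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target,    10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # the three pings above check both directions through the bridge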
00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:27.471 20:24:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:27.730 [2024-07-14 20:24:16.588210] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:27.730 [2024-07-14 20:24:16.588351] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.730 [2024-07-14 20:24:16.733002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:27.989 [2024-07-14 20:24:16.828275] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.989 [2024-07-14 20:24:16.828352] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.989 [2024-07-14 20:24:16.828363] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.989 [2024-07-14 20:24:16.828370] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.989 [2024-07-14 20:24:16.828377] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.989 [2024-07-14 20:24:16.828531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.989 [2024-07-14 20:24:16.829258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.989 [2024-07-14 20:24:16.829304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.557 20:24:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:28.557 20:24:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:28.557 20:24:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.557 20:24:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.557 20:24:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.557 20:24:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.557 20:24:17 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.816 [2024-07-14 20:24:17.793923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.816 20:24:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:29.074 Malloc0 00:24:29.074 20:24:18 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.333 20:24:18 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.592 20:24:18 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.850 [2024-07-14 20:24:18.835711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.850 20:24:18 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.108 [2024-07-14 20:24:19.111788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:30.108 20:24:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:30.367 [2024-07-14 20:24:19.372041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:30.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=106587 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 106587 /var/tmp/bdevperf.sock 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 106587 ']' 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
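At this point failover.sh has a target with a single malloc namespace reachable on three TCP ports, plus a bdevperf instance started in RPC-driven mode (-z) on its own socket so paths can be added and removed underneath it. A condensed sketch of that setup, taken from the trace (rpc.py calls without -s go to the target's default /var/tmp/spdk.sock; the loop is just shorthand for the three add_listener calls above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# bdevperf: 128 queue depth, 4 KiB I/O, verify workload, 15 s run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f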
00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:30.367 20:24:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.743 20:24:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:31.743 20:24:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:31.743 20:24:20 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:31.743 NVMe0n1 00:24:31.743 20:24:20 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.001 00:24:32.001 20:24:20 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=106635 00:24:32.001 20:24:20 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.001 20:24:20 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:32.937 20:24:21 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.195 20:24:22 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:36.506 20:24:25 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.506 00:24:36.506 20:24:25 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.765 20:24:25 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:40.050 20:24:28 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.050 [2024-07-14 20:24:29.081029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.050 20:24:29 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:41.426 20:24:30 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:41.426 [2024-07-14 20:24:30.334158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334281] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 
00:24:41.426 [2024-07-14 20:24:30.334445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is 
same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 [2024-07-14 20:24:30.334706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7980e0 is same with the state(5) to be set 00:24:41.426 20:24:30 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 106635 00:24:47.996 0 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 106587 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 106587 ']' 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 106587 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106587 00:24:47.996 killing process with pid 106587 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106587' 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 106587 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 106587 00:24:47.996 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
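Before the captured bdevperf log (try.txt) that follows: the listener add/remove calls earlier in the trace are the actual failover exercise. Condensed from the trace, while bdevperf ran its 15-second verify workload the script created two paths to cnode1 through the bdevperf RPC socket and then moved the target's listeners around underneath the I/O; the tqpair "recv state" messages above were logged while a listener was being pulled mid-I/O. A rough sketch (brpc is a local helper, not part of the script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc() { $rpc -s /var/tmp/bdevperf.sock "$@"; }    # bdevperf-side RPCs
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait    # let perform_tests finish the remaining I/O before teardown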
00:24:47.996 [2024-07-14 20:24:19.441710] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:47.996 [2024-07-14 20:24:19.441816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106587 ] 00:24:47.996 [2024-07-14 20:24:19.573551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.996 [2024-07-14 20:24:19.665093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.996 Running I/O for 15 seconds... 00:24:47.996 [2024-07-14 20:24:22.175145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175502] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90472 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.175974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.175996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.176012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.176026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.176041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.996 [2024-07-14 20:24:22.176054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.996 [2024-07-14 20:24:22.176070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.997 [2024-07-14 20:24:22.176092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.997 [2024-07-14 20:24:22.176120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.997 
[2024-07-14 20:24:22.176149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.997 [2024-07-14 20:24:22.176177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.997 [2024-07-14 20:24:22.176206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.176985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.176998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.997 [2024-07-14 20:24:22.177273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.997 [2024-07-14 20:24:22.177285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.997 [2024-07-14 20:24:22.177298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177570] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.998 [2024-07-14 20:24:22.177709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177831] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.177979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.177991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90688 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 
[2024-07-14 20:24:22.178391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.998 [2024-07-14 20:24:22.178443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.998 [2024-07-14 20:24:22.178461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:22.178917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.178951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aeb50 is same with the state(5) to be set 00:24:47.999 [2024-07-14 20:24:22.178968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.999 [2024-07-14 20:24:22.178978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.999 [2024-07-14 20:24:22.178989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90912 len:8 PRP1 0x0 PRP2 0x0 00:24:47.999 [2024-07-14 20:24:22.179002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.179078] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22aeb50 was disconnected and freed. reset controller. 
00:24:47.999 [2024-07-14 20:24:22.179096] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:47.999 [2024-07-14 20:24:22.179152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:22.179173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.179188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:22.179200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.179216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:22.179244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.179257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:22.179270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:22.179283] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.999 [2024-07-14 20:24:22.179325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290200 (9): Bad file descriptor 00:24:47.999 [2024-07-14 20:24:22.182862] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.999 [2024-07-14 20:24:22.218109] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:47.999 [2024-07-14 20:24:25.794817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:25.794950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.794991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:25.795037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.795054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:25.795067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.795081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.999 [2024-07-14 20:24:25.795094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.795107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2290200 is same with the state(5) to be set 00:24:47.999 [2024-07-14 20:24:25.798558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.999 [2024-07-14 20:24:25.798590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.798979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.798993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.799006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.799020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-07-14 20:24:25.799032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-07-14 20:24:25.799047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-07-14 20:24:25.799059] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c log pairs, 2024-07-14 20:24:25.799073 through 20:24:25.802190: 243:nvme_io_qpair_print_command *NOTICE*: READ sqid:1 nsid:1 lba:125824-126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:48.002 [2024-07-14 20:24:25.802203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bc400 is same with the state(5) to be set
00:24:48.002 [2024-07-14 20:24:25.802218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:48.002 [2024-07-14 20:24:25.802228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:48.002 [2024-07-14 20:24:25.802238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126712 len:8 PRP1 0x0 PRP2 0x0
00:24:48.002 [2024-07-14 20:24:25.802249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.002 [2024-07-14 20:24:25.802316] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22bc400 was disconnected and freed. reset controller.
00:24:48.002 [2024-07-14 20:24:25.802334] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:48.002 [2024-07-14 20:24:25.802347] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.002 [2024-07-14 20:24:25.805957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.002 [2024-07-14 20:24:25.805994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290200 (9): Bad file descriptor
00:24:48.002 [2024-07-14 20:24:25.841902] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... repeated nvme_qpair.c log pairs, 2024-07-14 20:24:30.335782 through 20:24:30.338680: 243:nvme_io_qpair_print_command *NOTICE*: READ sqid:1 nsid:1 lba:107408-107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:107728-108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:48.005 [2024-07-14 20:24:30.338694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:48.005 [2024-07-14 20:24:30.338707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.338961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.005 [2024-07-14 20:24:30.338974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108248 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:24:48.005 [2024-07-14 20:24:30.339087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108256 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108264 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108272 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108280 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108288 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108296 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339386] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108304 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108312 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108320 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108328 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108336 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.005 [2024-07-14 20:24:30.339601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.005 [2024-07-14 20:24:30.339610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.005 [2024-07-14 20:24:30.339619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108344 len:8 PRP1 0x0 PRP2 0x0 00:24:48.005 [2024-07-14 20:24:30.339630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.339642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.339652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.339661] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108352 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.339673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.339685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.339694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.339703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108360 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.339715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.339727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.339736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.339745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108368 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.339758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.339776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.339785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.339795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108376 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.339806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.339818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.339828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.339837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108384 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.339848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.349086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.349115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.349128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108392 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.349142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.349155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.349164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.349173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108400 
len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.349184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.349196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.349205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.349214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108408 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.349225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.349237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.349245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.349254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108416 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.349265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.349277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.349285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.349304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108424 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.349315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.349326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.006 [2024-07-14 20:24:30.349335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.006 [2024-07-14 20:24:30.349344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107720 len:8 PRP1 0x0 PRP2 0x0 00:24:48.006 [2024-07-14 20:24:30.349368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.006 [2024-07-14 20:24:30.349439] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22bc780 was disconnected and freed. reset controller. 
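The wall of WRITE/ABORTED records above is what a path teardown looks like from the bdevperf side: once the active submission queue is deleted, every I/O still queued on qid:1 completes with ABORTED - SQ DELETION, the remaining queued requests are completed manually, and the controller is then reset. A minimal sketch for pulling the interesting events out of a saved copy of this console output; the file name failover_console.log is hypothetical and not produced by the test itself:

  # count how many commands were aborted when the submission queue went away
  grep -c 'ABORTED - SQ DELETION' failover_console.log
  # show the failover transitions and each completed reset
  grep -E 'Start failover|Resetting controller successful' failover_console.log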
00:24:48.006 [2024-07-14 20:24:30.349457] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:48.006 [2024-07-14 20:24:30.349519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.006 [2024-07-14 20:24:30.349541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.006 [2024-07-14 20:24:30.349556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.006 [2024-07-14 20:24:30.349568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.006 [2024-07-14 20:24:30.349581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.006 [2024-07-14 20:24:30.349593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.006 [2024-07-14 20:24:30.349605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.006 [2024-07-14 20:24:30.349617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.006 [2024-07-14 20:24:30.349630] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.006 [2024-07-14 20:24:30.349682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2290200 (9): Bad file descriptor
00:24:48.006 [2024-07-14 20:24:30.354203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.006 [2024-07-14 20:24:30.391618] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:48.006
00:24:48.006 Latency(us)
00:24:48.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:48.006 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:48.006 Verification LBA range: start 0x0 length 0x4000
00:24:48.006 NVMe0n1 : 15.05 10014.85 39.12 237.41 0.00 12427.15 517.59 46709.29
00:24:48.006 ===================================================================================================================
00:24:48.006 Total : 10014.85 39.12 237.41 0.00 12427.15 517.59 46709.29
00:24:48.006 Received shutdown signal, test time was about 15.000000 seconds
00:24:48.006
00:24:48.006 Latency(us)
00:24:48.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:48.006 ===================================================================================================================
00:24:48.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:48.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=106837 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 106837 /var/tmp/bdevperf.sock 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 106837 ']' 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:48.006 20:24:36 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:48.006 [2024-07-14 20:24:37.054607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.264 20:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:48.264 [2024-07-14 20:24:37.278639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:48.264 20:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.832 NVMe0n1 00:24:48.832 20:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.090 00:24:49.090 20:24:37 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.347 00:24:49.347 20:24:38 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:49.347 20:24:38 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:49.605 20:24:38 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.863 20:24:38 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:53.146 20:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- 
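The count=3 check above confirms the first 15-second run recorded three successful controller resets; the trace that follows sets up the second phase: the cnode1 subsystem gains listeners on ports 4421 and 4422 alongside 4420, a fresh bdevperf attaches NVMe0 through all three trids, and the 4420 path is then detached so the bdev layer is forced to fail over. Condensed from the rpc.py calls in the trace (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path):

  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the first path; bdev_nvme must keep NVMe0n1 alive via 4421/4422
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1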
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.146 20:24:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:53.146 20:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:53.146 20:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=106962 00:24:53.146 20:24:42 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 106962 00:24:54.083 0 00:24:54.083 20:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:54.083 [2024-07-14 20:24:36.499999] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:54.083 [2024-07-14 20:24:36.500227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106837 ] 00:24:54.083 [2024-07-14 20:24:36.633569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.083 [2024-07-14 20:24:36.706307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.083 [2024-07-14 20:24:38.723813] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:54.083 [2024-07-14 20:24:38.723961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.083 [2024-07-14 20:24:38.723987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.083 [2024-07-14 20:24:38.724005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.083 [2024-07-14 20:24:38.724020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.083 [2024-07-14 20:24:38.724034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.083 [2024-07-14 20:24:38.724048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.083 [2024-07-14 20:24:38.724063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.083 [2024-07-14 20:24:38.724076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.083 [2024-07-14 20:24:38.724091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.083 [2024-07-14 20:24:38.724141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:54.083 [2024-07-14 20:24:38.724195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x996200 (9): Bad file descriptor 00:24:54.084 [2024-07-14 20:24:38.727472] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:54.084 Running I/O for 1 seconds... 
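The block above is host/failover.sh@94 dumping the bdevperf log (try.txt): the second bdevperf instance starts, loses the 4420 path, fails over to 4421, and then runs its 1-second verify workload. A small sketch of checking that file by hand, reusing the path and message strings visible in the trace:

  TRY=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep 'Start failover' "$TRY"                      # which trid pair each failover used
  grep -c 'Resetting controller successful' "$TRY"  # how many resets completed cleanly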
00:24:54.084 00:24:54.084 Latency(us) 00:24:54.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.084 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.084 Verification LBA range: start 0x0 length 0x4000 00:24:54.084 NVMe0n1 : 1.01 10315.75 40.30 0.00 0.00 12341.98 1735.21 15252.01 00:24:54.084 =================================================================================================================== 00:24:54.084 Total : 10315.75 40.30 0.00 0.00 12341.98 1735.21 15252.01 00:24:54.084 20:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.084 20:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:54.343 20:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.602 20:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.602 20:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:54.860 20:24:43 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.119 20:24:44 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 106837 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 106837 ']' 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 106837 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106837 00:24:58.427 killing process with pid 106837 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106837' 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 106837 00:24:58.427 20:24:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 106837 00:24:58.686 20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:58.686 20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:58.944 
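After the verify run, the remaining paths are removed the same way 4420 was: detach the 4422 trid, confirm NVMe0 is still registered, detach 4421, then kill bdevperf and delete the subsystem. Condensed from the rpc.py calls above:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # controller must survive the path loss
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1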
20:24:47 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.944 20:24:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.944 rmmod nvme_tcp 00:24:58.944 rmmod nvme_fabrics 00:24:58.944 rmmod nvme_keyring 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 106481 ']' 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 106481 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 106481 ']' 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 106481 00:24:58.944 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:59.203 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:59.203 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106481 00:24:59.203 killing process with pid 106481 00:24:59.203 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:59.203 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:59.203 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106481' 00:24:59.203 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 106481 00:24:59.203 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 106481 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:59.461 ************************************ 00:24:59.461 END TEST nvmf_failover 00:24:59.461 ************************************ 00:24:59.461 00:24:59.461 real 0m32.386s 00:24:59.461 user 2m5.172s 00:24:59.461 sys 0m4.817s 00:24:59.461 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:59.462 20:24:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.462 20:24:48 
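nvmftestfini above is the shared teardown for these nvmf host tests: it unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, kills the nvmf_tgt started for the test (pid 106481 here), tears down the namespaced test network, and flushes the initiator interface. A rough sketch of the same steps, taken from the commands visible in the trace; the explicit ip netns delete is an assumption about what _remove_spdk_ns does, since the trace only shows it being evaluated:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 106481                        # nvmf_tgt for the failover test
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if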
nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:59.462 20:24:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:59.462 20:24:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.462 20:24:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:59.462 ************************************ 00:24:59.462 START TEST nvmf_host_discovery 00:24:59.462 ************************************ 00:24:59.462 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:59.721 * Looking for test storage... 00:24:59.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:59.721 Cannot find device "nvmf_tgt_br" 00:24:59.721 
20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:59.721 Cannot find device "nvmf_tgt_br2" 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:59.721 Cannot find device "nvmf_tgt_br" 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:59.721 Cannot find device "nvmf_tgt_br2" 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:59.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:59.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:59.721 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:59.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:24:59.980 00:24:59.980 --- 10.0.0.2 ping statistics --- 00:24:59.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.980 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:59.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:59.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:24:59.980 00:24:59.980 --- 10.0.0.3 ping statistics --- 00:24:59.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.980 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:59.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:59.980 00:24:59.980 --- 10.0.0.1 ping statistics --- 00:24:59.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.980 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=107262 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 107262 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 107262 ']' 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:59.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:59.980 20:24:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.980 [2024-07-14 20:24:49.013951] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:59.980 [2024-07-14 20:24:49.014017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.239 [2024-07-14 20:24:49.150252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.239 [2024-07-14 20:24:49.228924] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.239 [2024-07-14 20:24:49.228980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:00.239 [2024-07-14 20:24:49.228991] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.239 [2024-07-14 20:24:49.229000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.239 [2024-07-14 20:24:49.229007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.239 [2024-07-14 20:24:49.229034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.175 20:24:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:01.175 20:24:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:01.175 20:24:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.175 20:24:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.175 20:24:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.175 [2024-07-14 20:24:50.016582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.175 [2024-07-14 20:24:50.024834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.175 null0 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:01.175 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.176 null1 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.176 20:24:50 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=107308 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 107308 /tmp/host.sock 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 107308 ']' 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:01.176 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:01.176 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.176 [2024-07-14 20:24:50.116624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:01.176 [2024-07-14 20:24:50.116718] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107308 ] 00:25:01.435 [2024-07-14 20:24:50.262038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.435 [2024-07-14 20:24:50.373761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.002 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:02.002 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:02.002 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.002 20:24:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:02.002 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.002 20:24:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.002 20:24:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.002 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.261 
20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:02.261 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.262 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.262 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.262 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.262 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.262 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.262 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.521 [2024-07-14 20:24:51.373046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.521 20:24:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.521 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.779 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:25:02.779 20:24:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:03.038 [2024-07-14 20:24:52.009788] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:03.038 [2024-07-14 20:24:52.009829] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:03.038 [2024-07-14 20:24:52.009863] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:03.038 [2024-07-14 20:24:52.096915] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:03.297 [2024-07-14 20:24:52.161338] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:03.297 [2024-07-14 20:24:52.161388] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.555 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.814 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:04.073 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.074 [2024-07-14 20:24:52.989807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:04.074 [2024-07-14 20:24:52.990386] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:04.074 [2024-07-14 20:24:52.990415] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.074 20:24:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.074 20:24:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.074 [2024-07-14 20:24:53.076465] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.074 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.074 [2024-07-14 20:24:53.133731] 
bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:04.074 [2024-07-14 20:24:53.133757] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:04.074 [2024-07-14 20:24:53.133780] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:04.331 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:04.331 20:24:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.263 [2024-07-14 20:24:54.294730] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:05.263 [2024-07-14 20:24:54.294767] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.263 [2024-07-14 20:24:54.298582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.263 [2024-07-14 20:24:54.298638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.263 [2024-07-14 20:24:54.298669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.263 [2024-07-14 20:24:54.298678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.263 [2024-07-14 20:24:54.298688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.263 [2024-07-14 20:24:54.298696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.263 [2024-07-14 20:24:54.298706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.263 [2024-07-14 20:24:54.298715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.263 [2024-07-14 20:24:54.298724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 
max=10 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.263 [2024-07-14 20:24:54.308540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.263 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.263 [2024-07-14 20:24:54.318565] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.263 [2024-07-14 20:24:54.318729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.263 [2024-07-14 20:24:54.318750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249a450 with addr=10.0.0.2, port=4420 00:25:05.263 [2024-07-14 20:24:54.318761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.263 [2024-07-14 20:24:54.318777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.263 [2024-07-14 20:24:54.318802] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.263 [2024-07-14 20:24:54.318812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.263 [2024-07-14 20:24:54.318823] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.263 [2024-07-14 20:24:54.318837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
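
The burst of "connect() failed, errno = 111" / "Resetting controller failed." entries in this stretch is expected: the test removed the 4420 listener from nqn.2016-06.io.spdk:cnode0 a few entries back, so every reconnect the host's bdev_nvme module attempts against 10.0.0.2:4420 is refused until the discovery poller prunes that path. A minimal way to watch the pruning from the host application's RPC socket is sketched below; it assumes SPDK's standard scripts/rpc.py client, whereas the trace itself goes through the suite's rpc_cmd wrapper.

  # Port list of the remaining paths on controller nvme0, read over /tmp/host.sock.
  # Right after the listener removal this still prints "4420 4421"; once the dead
  # path is dropped it prints only "4421".
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
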
00:25:05.263 [2024-07-14 20:24:54.328648] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.263 [2024-07-14 20:24:54.328751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.263 [2024-07-14 20:24:54.328771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249a450 with addr=10.0.0.2, port=4420 00:25:05.263 [2024-07-14 20:24:54.328781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.263 [2024-07-14 20:24:54.328796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.263 [2024-07-14 20:24:54.328818] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.263 [2024-07-14 20:24:54.328828] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.263 [2024-07-14 20:24:54.328837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.263 [2024-07-14 20:24:54.328850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.263 [2024-07-14 20:24:54.338717] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.263 [2024-07-14 20:24:54.338812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.263 [2024-07-14 20:24:54.338831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249a450 with addr=10.0.0.2, port=4420 00:25:05.263 [2024-07-14 20:24:54.338841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.263 [2024-07-14 20:24:54.338857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.263 [2024-07-14 20:24:54.338952] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.263 [2024-07-14 20:24:54.338969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.263 [2024-07-14 20:24:54.338978] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.263 [2024-07-14 20:24:54.338992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
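
The waitforcondition calls wrapped around these checks come from the harness's generic retry helper in common/autotest_common.sh. Reconstructed from the xtrace (the @910 through @916 lines), it behaves roughly like the sketch below; the real helper may differ in details such as how it reports a timeout.

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # re-evaluate the caller-supplied condition once per second
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1
  }

  # usage, as seen in this trace:
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
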
00:25:05.523 [2024-07-14 20:24:54.348782] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.523 [2024-07-14 20:24:54.348905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.523 [2024-07-14 20:24:54.348926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249a450 with addr=10.0.0.2, port=4420 00:25:05.523 [2024-07-14 20:24:54.348937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.523 [2024-07-14 20:24:54.348953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.523 [2024-07-14 20:24:54.348975] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.523 [2024-07-14 20:24:54.348985] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.523 [2024-07-14 20:24:54.348993] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.523 [2024-07-14 20:24:54.349006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.523 [2024-07-14 20:24:54.358846] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.523 [2024-07-14 20:24:54.359010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.523 [2024-07-14 20:24:54.359031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249a450 with addr=10.0.0.2, port=4420 00:25:05.523 [2024-07-14 20:24:54.359042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.523 [2024-07-14 20:24:54.359068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.523 [2024-07-14 20:24:54.359084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.523 [2024-07-14 20:24:54.359092] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.523 [2024-07-14 20:24:54.359101] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.523 [2024-07-14 20:24:54.359116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
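
The values those conditions compare are produced by small rpc_cmd + jq pipelines that the trace expands at host/discovery.sh@55, @59 and @63. Reconstructed roughly below; rpc_cmd is the suite's wrapper around scripts/rpc.py, and the exact definitions in host/discovery.sh may differ slightly.

  get_subsystem_names() { # controller names the host app currently has attached
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() { # bdevs created for the attached namespaces
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() { # trsvcid (port) of every path on one controller
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
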
00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.523 [2024-07-14 20:24:54.368996] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.523 [2024-07-14 20:24:54.369094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.523 [2024-07-14 20:24:54.369115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249a450 with addr=10.0.0.2, port=4420 00:25:05.523 [2024-07-14 20:24:54.369126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.523 [2024-07-14 20:24:54.369142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.523 [2024-07-14 20:24:54.369156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.523 [2024-07-14 20:24:54.369188] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.523 [2024-07-14 20:24:54.369197] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.523 [2024-07-14 20:24:54.369228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
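
The path churn being waited on here, 10.0.0.2:4420 disappearing while 4421 stays, is driven entirely by listener changes made on the target earlier in the trace, and it produces the ":4420 not found" / ":4421 found again" discovery entries that follow shortly. The equivalent standalone commands against the target's default RPC socket (/var/tmp/spdk.sock, the socket waitforlisten polled when the target started) would look like this sketch; the test itself issues them through rpc_cmd rather than calling rpc.py directly.

  # Add a second data listener, then retire the original one; on the next discovery
  # log page the host reports 4421 as a new path and 4420 as removed.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
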
00:25:05.523 [2024-07-14 20:24:54.379048] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.523 [2024-07-14 20:24:54.379136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.523 [2024-07-14 20:24:54.379157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249a450 with addr=10.0.0.2, port=4420 00:25:05.523 [2024-07-14 20:24:54.379168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a450 is same with the state(5) to be set 00:25:05.523 [2024-07-14 20:24:54.379184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249a450 (9): Bad file descriptor 00:25:05.523 [2024-07-14 20:24:54.379199] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.523 [2024-07-14 20:24:54.379208] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.523 [2024-07-14 20:24:54.379217] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.523 [2024-07-14 20:24:54.379272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.523 [2024-07-14 20:24:54.381021] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:05.523 [2024-07-14 20:24:54.381051] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.523 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.782 20:24:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.717 [2024-07-14 20:24:55.740267] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:06.717 [2024-07-14 20:24:55.740300] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:06.717 [2024-07-14 20:24:55.740320] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.975 [2024-07-14 20:24:55.826411] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:06.975 [2024-07-14 20:24:55.885733] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:06.975 [2024-07-14 20:24:55.885779] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.975 20:24:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.975 2024/07/14 20:24:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:25:06.975 request: 00:25:06.975 { 00:25:06.975 "method": "bdev_nvme_start_discovery", 00:25:06.975 "params": { 00:25:06.975 "name": "nvme", 00:25:06.975 "trtype": "tcp", 00:25:06.975 "traddr": "10.0.0.2", 00:25:06.975 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:06.975 "adrfam": "ipv4", 00:25:06.975 "trsvcid": "8009", 00:25:06.975 "wait_for_attach": true 00:25:06.975 } 00:25:06.975 } 00:25:06.975 Got JSON-RPC error response 00:25:06.975 GoRPCClient: error on JSON-RPC call 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.975 20:24:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:06.975 20:24:56 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.975 2024/07/14 20:24:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:25:06.975 request: 00:25:06.975 { 00:25:06.975 "method": "bdev_nvme_start_discovery", 00:25:06.975 "params": { 00:25:06.975 "name": "nvme_second", 00:25:06.975 "trtype": "tcp", 00:25:06.975 "traddr": "10.0.0.2", 00:25:06.975 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:06.975 "adrfam": "ipv4", 00:25:06.975 "trsvcid": "8009", 00:25:06.975 "wait_for_attach": true 00:25:06.975 } 00:25:06.975 } 00:25:06.975 Got JSON-RPC error response 00:25:06.975 GoRPCClient: error on JSON-RPC call 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:06.975 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.233 20:24:56 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.233 20:24:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.165 [2024-07-14 20:24:57.159649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.166 [2024-07-14 20:24:57.159727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cf7a0 with addr=10.0.0.2, port=8010 00:25:08.166 [2024-07-14 20:24:57.159755] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:08.166 [2024-07-14 20:24:57.159765] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:08.166 [2024-07-14 20:24:57.159773] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:09.142 [2024-07-14 20:24:58.159658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.142 [2024-07-14 20:24:58.159736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cbc80 with addr=10.0.0.2, port=8010 00:25:09.142 [2024-07-14 20:24:58.159766] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:09.142 [2024-07-14 20:24:58.159777] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:09.142 [2024-07-14 20:24:58.159787] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:10.080 [2024-07-14 20:24:59.159487] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out 
while attaching discovery ctrlr 00:25:10.080 2024/07/14 20:24:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:25:10.080 request: 00:25:10.080 { 00:25:10.080 "method": "bdev_nvme_start_discovery", 00:25:10.080 "params": { 00:25:10.080 "name": "nvme_second", 00:25:10.080 "trtype": "tcp", 00:25:10.080 "traddr": "10.0.0.2", 00:25:10.080 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:10.080 "adrfam": "ipv4", 00:25:10.080 "trsvcid": "8010", 00:25:10.080 "attach_timeout_ms": 3000 00:25:10.080 } 00:25:10.080 } 00:25:10.080 Got JSON-RPC error response 00:25:10.080 GoRPCClient: error on JSON-RPC call 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 107308 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.339 rmmod nvme_tcp 00:25:10.339 rmmod nvme_fabrics 00:25:10.339 rmmod nvme_keyring 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 
-- # '[' -n 107262 ']' 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 107262 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 107262 ']' 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 107262 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107262 00:25:10.339 killing process with pid 107262 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107262' 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 107262 00:25:10.339 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 107262 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:10.905 00:25:10.905 real 0m11.345s 00:25:10.905 user 0m22.076s 00:25:10.905 sys 0m1.839s 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:10.905 ************************************ 00:25:10.905 20:24:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.905 END TEST nvmf_host_discovery 00:25:10.906 ************************************ 00:25:10.906 20:24:59 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:10.906 20:24:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:10.906 20:24:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:10.906 20:24:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.906 ************************************ 00:25:10.906 START TEST nvmf_host_multipath_status 00:25:10.906 ************************************ 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:10.906 * Looking for test storage... 
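The tail of nvmf_host_discovery above is the standard teardown: nvmftestfini unloads the host-side modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring output), kills the target process (pid 107262 in this run), and flushes the test interface before the timing summary is printed. Condensed, the cleanup amounts to roughly the following (a sketch of the traced commands, not the full common.sh logic):

  modprobe -v -r nvme-tcp        # the -v output above shows nvme_fabrics and nvme_keyring going with it
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # 107262 here; killprocess also verifies the process name first
  ip -4 addr flush nvmf_init_if  # remove the initiator-side test address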
00:25:10.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.906 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:11.165 20:24:59 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:11.165 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:11.166 Cannot find device "nvmf_tgt_br" 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:25:11.166 Cannot find device "nvmf_tgt_br2" 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:11.166 Cannot find device "nvmf_tgt_br" 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:11.166 Cannot find device "nvmf_tgt_br2" 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:11.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:11.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:11.166 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:11.425 20:25:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:11.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:25:11.425 00:25:11.425 --- 10.0.0.2 ping statistics --- 00:25:11.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.425 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:11.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:11.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:11.425 00:25:11.425 --- 10.0.0.3 ping statistics --- 00:25:11.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.425 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:11.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:25:11.425 00:25:11.425 --- 10.0.0.1 ping statistics --- 00:25:11.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.425 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=107797 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 107797 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 107797 ']' 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:11.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.425 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.426 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:11.426 20:25:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.426 [2024-07-14 20:25:00.412943] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:11.426 [2024-07-14 20:25:00.413018] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.684 [2024-07-14 20:25:00.546145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:11.684 [2024-07-14 20:25:00.664348] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.684 [2024-07-14 20:25:00.664933] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.684 [2024-07-14 20:25:00.665179] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.684 [2024-07-14 20:25:00.665525] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.684 [2024-07-14 20:25:00.665796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.684 [2024-07-14 20:25:00.666126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.684 [2024-07-14 20:25:00.666134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=107797 00:25:12.618 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:12.875 [2024-07-14 20:25:01.723702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.875 20:25:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:13.133 Malloc0 00:25:13.133 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:13.417 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.417 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.675 [2024-07-14 20:25:02.674971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.675 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
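Between nvmfappstart and the two tcp.c listen notices, the trace above builds the target side for the multipath test: a TCP transport, a 64 MiB / 512-byte-block Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting (-r) and two listeners on 10.0.0.2. Collected from the traced rpc.py calls into one sequence (a sketch; in this run the target itself is started inside the nvmf_tgt_ns_spdk namespace while these calls go over the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421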
00:25:13.933 [2024-07-14 20:25:02.939155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:13.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=107895 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 107895 /var/tmp/bdevperf.sock 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 107895 ']' 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:13.933 20:25:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.870 20:25:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:14.870 20:25:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:14.870 20:25:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:15.128 20:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:15.385 Nvme0n1 00:25:15.385 20:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:15.950 Nvme0n1 00:25:15.950 20:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:15.950 20:25:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:17.849 20:25:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:17.849 20:25:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:18.107 20:25:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
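Once bdevperf has attached Nvme0n1 over both ports with -x multipath, the test drives failover by calling nvmf_subsystem_listener_set_ana_state on each listener (optimized / non_optimized, as above) and then verifies the host's view with check_status. The per-port probe behind those checks is bdev_nvme_get_io_paths filtered through jq on the bdevperf RPC socket; a hedged standalone version of the port_status helper seen in the trace:

  port_status() {   # usage: port_status 4420 current true   (or: port_status 4421 accessible true)
      local trsvcid=$1 field=$2 expected=$3
      local got
      got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$trsvcid\").$field")
      [[ $got == "$expected" ]]
  }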
00:25:18.365 20:25:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:19.301 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:19.301 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:19.301 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.301 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.560 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.560 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:19.560 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.560 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.819 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.819 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:19.819 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.819 20:25:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.387 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:20.647 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.647 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:20.647 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.647 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:20.906 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.906 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:20.906 20:25:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:21.165 20:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:21.424 20:25:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:22.361 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:22.361 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:22.361 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.361 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.620 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.620 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:22.620 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.620 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.879 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.879 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.879 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.879 20:25:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:23.138 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.138 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.138 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.138 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.397 20:25:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.397 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.397 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.397 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.657 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.657 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:23.657 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.657 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.915 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.915 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:23.916 20:25:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:24.174 20:25:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:24.740 20:25:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:25.674 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:25.674 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:25.674 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.674 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.933 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.933 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:25.933 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.933 20:25:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.191 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.191 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.191 20:25:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:26.191 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.449 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.449 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.449 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.449 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.708 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.708 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.708 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.708 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:26.965 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.965 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:26.965 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.965 20:25:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.223 20:25:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.223 20:25:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:27.223 20:25:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.482 20:25:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:27.740 20:25:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:28.675 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:28.675 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:28.675 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.675 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.933 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.933 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:28.933 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.933 20:25:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.192 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.192 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.192 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.192 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.450 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.450 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.450 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.450 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:29.708 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.708 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:29.708 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.708 20:25:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.965 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.965 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:29.965 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.965 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.223 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.223 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:30.223 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:30.496 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:30.766 20:25:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:31.699 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:31.699 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:31.699 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.699 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.957 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.957 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:31.957 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:31.957 20:25:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.216 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.216 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.216 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.216 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.497 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.497 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.497 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.497 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.756 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.756 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:32.756 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.756 20:25:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.323 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.323 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:33.323 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.323 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.323 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.323 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:33.323 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:33.582 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.841 20:25:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:34.775 20:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:34.775 20:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.775 20:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.775 20:25:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.033 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.033 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:35.033 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.033 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.291 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.291 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.291 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.291 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.549 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.549 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.549 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.549 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.808 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.808 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:35.808 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.808 20:25:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.067 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.067 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:36.067 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.067 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.326 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.326 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:36.585 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:36.585 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:36.844 20:25:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.102 20:25:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:38.038 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:38.038 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:38.038 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.038 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.297 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.297 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.297 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.297 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.556 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.556 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.556 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.556 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.815 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.815 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.815 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.815 20:25:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.074 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.074 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:39.074 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.074 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.333 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.333 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:39.333 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:39.333 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.593 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.593 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:39.593 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.852 20:25:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:40.110 20:25:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:41.045 20:25:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:41.045 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:41.045 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.045 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:41.304 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.304 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:41.304 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.304 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.562 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.562 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.562 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.562 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.820 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.820 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.820 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.820 20:25:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.078 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.078 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:42.078 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.078 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.337 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.337 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:42.337 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.337 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.595 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.595 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:42.595 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:42.853 20:25:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:43.111 20:25:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.484 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.742 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.742 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.742 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.742 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.001 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.001 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.001 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.001 20:25:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.259 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.259 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.259 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.259 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.517 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.517 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.517 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.517 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.776 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.776 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:45.776 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.033 20:25:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:46.292 20:25:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:47.226 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:47.226 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:47.226 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.226 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.484 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.484 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:47.484 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.484 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.743 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.743 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.743 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
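Each check_status round in this part of the trace reduces to six per-port probes like the ones above: bdev_nvme_get_io_paths is queried over the bdevperf RPC socket, the per-path flags (current, connected, accessible) are pulled out with the jq filter shown in the trace, and the result is compared with the value expected for the ANA state that was just set (the same probes are repeated after bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active). A stand-alone version of that probe, with a hypothetical helper name and argument order chosen only for illustration:

  # port_flag <trsvcid> <field> <expected>, e.g.: port_flag 4421 accessible false
  port_flag() {
      local port=$1 field=$2 expected=$3 actual
      # Query the io paths known to bdevperf and extract one flag for one listener port.
      actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                   bdev_nvme_get_io_paths |
               jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      # Exit status mirrors the [[ ... ]] comparisons recorded in the trace.
      [[ "$actual" == "$expected" ]]
  }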
00:25:47.743 20:25:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.013 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.013 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.013 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.013 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.284 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.284 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.284 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.284 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.542 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.542 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:48.542 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.542 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 107895 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 107895 ']' 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 107895 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107895 00:25:48.799 killing process with pid 107895 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107895' 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 107895 00:25:48.799 20:25:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 107895 00:25:49.069 Connection closed with partial response: 00:25:49.069 00:25:49.069 00:25:49.069 20:25:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 107895 00:25:49.069 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:49.069 [2024-07-14 20:25:03.002100] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:49.069 [2024-07-14 20:25:03.002198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107895 ] 00:25:49.069 [2024-07-14 20:25:03.142101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.069 [2024-07-14 20:25:03.260140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.069 Running I/O for 90 seconds... 00:25:49.069 [2024-07-14 20:25:19.494750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.069 [2024-07-14 20:25:19.494831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.494940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.494963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.494985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.069 [2024-07-14 20:25:19.495928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:49.069 [2024-07-14 20:25:19.495980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.495995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.496027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.496078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.496113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.496147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.496181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 
sqhd:0038 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.496214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.496246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.496280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.496327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.496347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.496375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.498748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.070 [2024-07-14 20:25:19.498778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.498805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.498819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.498841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.498920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.498960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.498976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499015] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 
[2024-07-14 20:25:19.499487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.499965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.499986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500227] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.070 [2024-07-14 20:25:19.500502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.070 [2024-07-14 20:25:19.500517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.500539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:19.500552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.500573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:19.500587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.500609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:19.500622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.500645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:19.500658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.500680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:19.500693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.500716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:19.500729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.501939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:19.501960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.501988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.502766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.502793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:49.071 [2024-07-14 20:25:19.502808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:19.503374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:19.503389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:35.201344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:35.201827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:35.201859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:35.201922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.201960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.201980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.202004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.202043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.202058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.202079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.202093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.202114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.202128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.202149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.071 [2024-07-14 20:25:35.202164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:25:49.071 [2024-07-14 20:25:35.202186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:35.202200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.202221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:35.202236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.202256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.071 [2024-07-14 20:25:35.202271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:49.071 [2024-07-14 20:25:35.202291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.202305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.202326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.202340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.202362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.202376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.202397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.202411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.202433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.202455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.202477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.202492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.202528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.202542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.202563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.202577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.203097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.203140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.203391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.203423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.203470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.203510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.203681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:49.072 [2024-07-14 20:25:35.203777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.203829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.203843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.205141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.205183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.205218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.072 [2024-07-14 20:25:35.205251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.072 [2024-07-14 20:25:35.205576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.072 [2024-07-14 20:25:35.205596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.205707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.205836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.205868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.205957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.205976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.206569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:25:49.073 [2024-07-14 20:25:35.206592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.206606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.206640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.206672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.206704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.206904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.206972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.206987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.207009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.207023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.207048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.207063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.207083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.207096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.207116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.207129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.207149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.207162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.207182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.207196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.207216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.207230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.208234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.208267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.208299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.208345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.208378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:49.073 [2024-07-14 20:25:35.208578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.073 [2024-07-14 20:25:35.208673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:49.073 [2024-07-14 20:25:35.208692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.073 [2024-07-14 20:25:35.208705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.208745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.208781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.208812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.208843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.208891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.208923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.208954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.208973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.208986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.209004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.209017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.209036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.209049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.209068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.209081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.209100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.209112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:25:49.074 [2024-07-14 20:25:35.211757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.211806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.211972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.211985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.212048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.212112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.074 [2024-07-14 20:25:35.212960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.212983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.212998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:49.074 [2024-07-14 20:25:35.213251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.074 [2024-07-14 20:25:35.213264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.213295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.213326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:49.075 [2024-07-14 20:25:35.213358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.213389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.213796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.213833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.213881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.213915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.213946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.213965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.213988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.214179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.214210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.214250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.214282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.214313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.214930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.214963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.214983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.214996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.215015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.215029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.215048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.215062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.215081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.215095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:25:49.075 [2024-07-14 20:25:35.215114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.215127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.215147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.215161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.216811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.216837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.216862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.216878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.216912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.216928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.216948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.216982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.216996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.217029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.217063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.217097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.217130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.217164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.217212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.217244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.217289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.075 [2024-07-14 20:25:35.217336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.217367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.217399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.217417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.217430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.219262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.075 [2024-07-14 20:25:35.219303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.075 [2024-07-14 20:25:35.219327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 
[2024-07-14 20:25:35.219638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.219732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.219763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.219795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.219827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.219858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.219935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.219956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.219979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12872 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.220179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.220213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.220336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.220386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.220399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:49.076 [2024-07-14 20:25:35.221534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.076 [2024-07-14 20:25:35.221578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.221609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.221640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.221684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.076 [2024-07-14 20:25:35.221699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:49.076 [2024-07-14 20:25:35.223634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.223660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.223700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.223734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.223766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.223798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.223831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.223862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.223928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.223961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.223980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.223994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.224141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.224208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.077 [2024-07-14 20:25:35.224570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.224639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.224674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.224708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.224743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.224778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.224799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.224812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.226433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.226974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.226995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.227009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.227031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.227045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.227491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.077 [2024-07-14 20:25:35.227515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.227538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.227552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.227570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.227583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.227602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.077 [2024-07-14 20:25:35.227615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.077 [2024-07-14 20:25:35.227634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.227646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.227678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:49.078 [2024-07-14 20:25:35.227696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.227709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.227740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.227771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.227803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.227834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.227881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.227938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.227960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.227975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.228001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.228015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.228034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.228047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.228066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.228080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.228099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.228112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.228131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.228144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.228163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.228177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.228196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.228210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.229309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.229511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.229542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.229574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.229605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:49.078 [2024-07-14 20:25:35.229730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.229976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.229995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.230009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.230211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.230266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.230345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.230427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.078 [2024-07-14 20:25:35.230440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.233291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.233318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.078 [2024-07-14 20:25:35.233342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.078 [2024-07-14 20:25:35.233357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.233389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.233420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.233451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:25:49.079 [2024-07-14 20:25:35.233640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.233777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.233808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.233915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.233959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.233980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.233994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.234027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.234062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.234095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.234129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.234161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.234195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.234243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.234275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.234321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.234352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.234371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.234384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.235408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.235446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.235637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:49.079 [2024-07-14 20:25:35.235669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.235700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.079 [2024-07-14 20:25:35.235731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.079 [2024-07-14 20:25:35.235885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:49.079 [2024-07-14 20:25:35.235933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.235949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.235969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.235983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.236002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.236016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.236036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.236049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.236069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.236082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.236102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.236116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.236135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.236149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.236169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.236183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:25:49.080 [2024-07-14 20:25:35.237786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.237941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.237974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.237993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.238007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.238040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.238073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.238106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.238139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.238172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.238218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.238266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.238313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.238344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.238376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.238407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.238438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.238457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.080 [2024-07-14 20:25:35.238469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.239486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.239512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:49.080 [2024-07-14 20:25:35.239536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.080 [2024-07-14 20:25:35.239551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.239570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.239584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.239603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.239617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.239648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.239663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.239682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.239695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.240104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.240143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.240177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.240210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:49.081 [2024-07-14 20:25:35.240256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.240287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.240319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.240350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.240382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.240413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.240454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.240488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.240507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.240520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:25:49.081 [2024-07-14 20:25:35.241798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.081 [2024-07-14 20:25:35.241961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:49.081 [2024-07-14 20:25:35.241981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.081 [2024-07-14 20:25:35.241995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.242638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.242677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.242709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.242741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.242772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.242804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.242836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.242854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.242883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.243850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.243944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.243974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.243989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.082 [2024-07-14 20:25:35.244119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.244152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.244184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 
lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.244458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.244489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.244508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.082 [2024-07-14 20:25:35.244521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.246518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.246543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.246567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.246582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.246600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.246613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.246632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.246645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.246663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.246676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:49.082 [2024-07-14 20:25:35.246694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.082 [2024-07-14 20:25:35.246707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.246725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.246739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.246757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.246780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.246800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.246813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.246832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.246845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.246863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.246876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.246934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.246951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.246971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.246985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.247004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.247017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.247038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.247052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.247072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.247085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.247105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.247119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:25:49.083 [2024-07-14 20:25:35.247138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.247152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.247172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.247185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.248937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.248964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.249527] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.083 [2024-07-14 20:25:35.249654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.083 [2024-07-14 20:25:35.249686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.083 [2024-07-14 20:25:35.249704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.249718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.249752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.249765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.249784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.084 [2024-07-14 20:25:35.249804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.249824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.249838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14
20:25:35.249857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.249900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.249923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.249937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.249957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.249971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.249991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.250004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.250025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.250038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.250956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.250983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.251023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 
cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.251363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.251394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.251608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.251622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.254763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.254828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.254934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.254957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.254980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.254995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.084 [2024-07-14 20:25:35.255030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255099] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.084 [2024-07-14 20:25:35.255537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.084 [2024-07-14 20:25:35.255572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.085 [2024-07-14 20:25:35.255585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.085 [2024-07-14 20:25:35.255605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.085 [2024-07-14 20:25:35.255618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.085 Received shutdown signal, test time was about 32.922164 seconds 00:25:49.085 00:25:49.085 Latency(us) 00:25:49.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.085 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:49.085 Verification LBA range: start 0x0 length 0x4000 00:25:49.085 Nvme0n1 : 32.92 9272.94 36.22 0.00 0.00 13776.02 271.83 4026531.84 00:25:49.085 =================================================================================================================== 00:25:49.085 Total : 9272.94 36.22 0.00 0.00 13776.02 271.83 4026531.84 00:25:49.342 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:49.342 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:49.342 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:49.342 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.342 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.600 rmmod nvme_tcp 00:25:49.600 rmmod nvme_fabrics 00:25:49.600 rmmod nvme_keyring 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 107797 ']' 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 107797 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@946 -- # '[' -z 107797 ']' 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 107797 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107797 00:25:49.600 killing process with pid 107797 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107797' 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 107797 00:25:49.600 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 107797 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:49.923 00:25:49.923 real 0m39.023s 00:25:49.923 user 2m6.373s 00:25:49.923 sys 0m10.028s 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:49.923 ************************************ 00:25:49.923 20:25:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:49.923 END TEST nvmf_host_multipath_status 00:25:49.923 ************************************ 00:25:49.923 20:25:38 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:49.923 20:25:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:49.923 20:25:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:49.923 20:25:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:49.923 ************************************ 00:25:49.923 START TEST nvmf_discovery_remove_ifc 00:25:49.923 ************************************ 00:25:49.923 20:25:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:50.183 * Looking for test storage... 
00:25:50.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:50.183 Cannot find device "nvmf_tgt_br" 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:25:50.183 Cannot find device "nvmf_tgt_br2" 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:50.183 Cannot find device "nvmf_tgt_br" 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:50.183 Cannot find device "nvmf_tgt_br2" 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:50.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:50.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:50.183 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:50.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:25:50.442 00:25:50.442 --- 10.0.0.2 ping statistics --- 00:25:50.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.442 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:50.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:50.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:25:50.442 00:25:50.442 --- 10.0.0.3 ping statistics --- 00:25:50.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.442 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:50.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:50.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:25:50.442 00:25:50.442 --- 10.0.0.1 ping statistics --- 00:25:50.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.442 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=109186 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 109186 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 109186 ']' 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:50.442 20:25:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.442 [2024-07-14 20:25:39.447875] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:50.442 [2024-07-14 20:25:39.447973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.701 [2024-07-14 20:25:39.586709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.701 [2024-07-14 20:25:39.683079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:50.701 [2024-07-14 20:25:39.683146] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.701 [2024-07-14 20:25:39.683157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.701 [2024-07-14 20:25:39.683166] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.701 [2024-07-14 20:25:39.683174] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.701 [2024-07-14 20:25:39.683199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.635 [2024-07-14 20:25:40.514318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.635 [2024-07-14 20:25:40.522426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:51.635 null0 00:25:51.635 [2024-07-14 20:25:40.554304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=109235 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 109235 /tmp/host.sock 00:25:51.635 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 109235 ']' 00:25:51.636 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:51.636 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:51.636 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:51.636 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:51.636 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:25:51.636 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:51.636 20:25:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.636 [2024-07-14 20:25:40.628451] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:51.636 [2024-07-14 20:25:40.628541] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109235 ] 00:25:51.894 [2024-07-14 20:25:40.764189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.894 [2024-07-14 20:25:40.850747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:52.829 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.830 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.830 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.830 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:52.830 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.830 20:25:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.765 [2024-07-14 20:25:42.807375] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:53.765 [2024-07-14 20:25:42.807402] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:53.765 [2024-07-14 20:25:42.807420] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.024 [2024-07-14 20:25:42.893489] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:54.024 [2024-07-14 20:25:42.949464] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:54.025 [2024-07-14 20:25:42.949525] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:54.025 [2024-07-14 
20:25:42.949553] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:54.025 [2024-07-14 20:25:42.949570] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:54.025 [2024-07-14 20:25:42.949597] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.025 [2024-07-14 20:25:42.955979] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23ee870 was disconnected and freed. delete nvme_qpair. 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:54.025 20:25:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:54.025 20:25:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.404 20:25:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.337 20:25:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.276 20:25:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:58.210 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.210 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.210 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.210 20:25:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.210 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.210 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.210 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.468 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.468 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.468 20:25:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.405 [2024-07-14 20:25:48.377274] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:59.405 [2024-07-14 20:25:48.377339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.405 [2024-07-14 20:25:48.377354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.405 [2024-07-14 20:25:48.377367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.405 [2024-07-14 20:25:48.377376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.405 [2024-07-14 20:25:48.377386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.405 [2024-07-14 20:25:48.377395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.405 [2024-07-14 20:25:48.377403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.405 [2024-07-14 20:25:48.377411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.405 [2024-07-14 20:25:48.377421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.405 [2024-07-14 20:25:48.377430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.405 
[2024-07-14 20:25:48.377439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5d40 is same with the state(5) to be set 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.405 20:25:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.405 [2024-07-14 20:25:48.387270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b5d40 (9): Bad file descriptor 00:25:59.405 [2024-07-14 20:25:48.397307] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.343 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.343 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.343 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.343 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.343 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.343 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.343 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.343 [2024-07-14 20:25:49.426988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:00.343 [2024-07-14 20:25:49.427055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b5d40 with addr=10.0.0.2, port=4420 00:26:00.343 [2024-07-14 20:25:49.427078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5d40 is same with the state(5) to be set 00:26:00.343 [2024-07-14 20:25:49.427118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b5d40 (9): Bad file descriptor 00:26:00.343 [2024-07-14 20:25:49.427704] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:00.343 [2024-07-14 20:25:49.427748] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.343 [2024-07-14 20:25:49.427764] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.343 [2024-07-14 20:25:49.427782] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:00.343 [2024-07-14 20:25:49.427822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.343 [2024-07-14 20:25:49.427839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.602 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.602 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.602 20:25:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.538 [2024-07-14 20:25:50.427920] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:01.538 [2024-07-14 20:25:50.427972] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:01.538 [2024-07-14 20:25:50.427994] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:01.538 [2024-07-14 20:25:50.428005] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:01.538 [2024-07-14 20:25:50.428032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.538 [2024-07-14 20:25:50.428066] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:01.538 [2024-07-14 20:25:50.428128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.538 [2024-07-14 20:25:50.428144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.538 [2024-07-14 20:25:50.428160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.538 [2024-07-14 20:25:50.428169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.538 [2024-07-14 20:25:50.428179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.538 [2024-07-14 20:25:50.428188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.538 [2024-07-14 20:25:50.428198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.538 [2024-07-14 20:25:50.428206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.538 [2024-07-14 20:25:50.428216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.538 [2024-07-14 20:25:50.428224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.538 [2024-07-14 20:25:50.428234] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
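
The give-up behaviour in the error burst above follows from the reconnect knobs passed to bdev_nvme_start_discovery earlier in this trace (the @69 rpc_cmd line). Read against the timestamps here, the annotated excerpt below summarizes their effect; the annotations are an editor's interpretation of those flags, not output produced by the test itself:

  # Flags copied from the @69 bdev_nvme_start_discovery call traced above; comments are interpretation.
  --reconnect-delay-sec 1       # wait roughly 1 s between reconnect attempts (retries at 20:25:48 -> 20:25:49)
  --ctrlr-loss-timeout-sec 2    # after ~2 s without a usable path, fail the controller (the 20:25:50 nvme_ctrlr_fail lines)
  --fast-io-fail-timeout-sec 1  # start failing queued I/O back after ~1 s instead of holding it for the full loss timeout

With the target interface deliberately removed, every reconnect times out, so once the 2-second controller-loss window expires the host tears the controller down and the discovery entry for 10.0.0.2:4420 is removed, which is exactly what the log continues to show below.
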
00:26:01.538 [2024-07-14 20:25:50.428286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b51d0 (9): Bad file descriptor 00:26:01.538 [2024-07-14 20:25:50.429276] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:01.538 [2024-07-14 20:25:50.429299] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:01.538 20:25:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.916 20:25:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:02.916 20:25:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.484 [2024-07-14 20:25:52.438749] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:03.484 [2024-07-14 20:25:52.438789] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:03.484 [2024-07-14 20:25:52.438807] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.484 [2024-07-14 20:25:52.524838] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:03.743 [2024-07-14 20:25:52.579984] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:03.743 [2024-07-14 20:25:52.580029] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:03.743 [2024-07-14 20:25:52.580052] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:03.743 [2024-07-14 20:25:52.580069] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:03.743 [2024-07-14 20:25:52.580077] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:03.743 [2024-07-14 20:25:52.587368] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23f9340 was disconnected and freed. delete nvme_qpair. 
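With the target interface restored inside the namespace, the discovery poller on 10.0.0.2:8009 reconnects, reads the discovery log page, and re-attaches the NVM subsystem as nvme1, which is what the bdev_nvme INFO lines above record. The restore-and-wait step itself is just the three traced commands, collected here (wait_for_bdev as sketched earlier):

  # Give the target its address back inside the namespace, bring the link up,
  # and wait for the rediscovered namespace to reappear as a bdev.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1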
00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 109235 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 109235 ']' 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 109235 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109235 00:26:03.743 killing process with pid 109235 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109235' 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 109235 00:26:03.743 20:25:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 109235 00:26:04.002 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:04.002 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:04.002 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:04.261 rmmod nvme_tcp 00:26:04.261 rmmod nvme_fabrics 00:26:04.261 rmmod nvme_keyring 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:04.261 
20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 109186 ']' 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 109186 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 109186 ']' 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 109186 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109186 00:26:04.261 killing process with pid 109186 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109186' 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 109186 00:26:04.261 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 109186 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:04.519 00:26:04.519 real 0m14.553s 00:26:04.519 user 0m26.041s 00:26:04.519 sys 0m1.749s 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:04.519 20:25:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.519 ************************************ 00:26:04.519 END TEST nvmf_discovery_remove_ifc 00:26:04.519 ************************************ 00:26:04.519 20:25:53 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:04.519 20:25:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:04.519 20:25:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:04.519 20:25:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.519 ************************************ 00:26:04.519 START TEST nvmf_identify_kernel_target 00:26:04.519 ************************************ 00:26:04.519 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:04.778 * Looking for test storage... 00:26:04.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.778 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:04.779 Cannot find device "nvmf_tgt_br" 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:04.779 Cannot find device "nvmf_tgt_br2" 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:04.779 Cannot find device "nvmf_tgt_br" 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:04.779 Cannot find device "nvmf_tgt_br2" 00:26:04.779 20:25:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:04.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:04.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:04.779 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:05.038 20:25:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:05.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:26:05.038 00:26:05.038 --- 10.0.0.2 ping statistics --- 00:26:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.038 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:05.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:05.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:26:05.038 00:26:05.038 --- 10.0.0.3 ping statistics --- 00:26:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.038 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:05.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:26:05.038 00:26:05.038 --- 10.0.0.1 ping statistics --- 00:26:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.038 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:05.038 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:05.039 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:05.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:05.607 Waiting for block devices as requested 00:26:05.607 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:05.607 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:05.607 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:05.867 No valid GPT data, bailing 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:05.867 No valid GPT data, bailing 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:05.867 No valid GPT data, bailing 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:05.867 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:06.127 No valid GPT data, bailing 00:26:06.127 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:06.128 20:25:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:06.128 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -a 10.0.0.1 -t tcp -s 4420 00:26:06.128 00:26:06.128 Discovery Log Number of Records 2, Generation counter 2 00:26:06.128 =====Discovery Log Entry 0====== 00:26:06.128 trtype: tcp 00:26:06.128 adrfam: ipv4 00:26:06.128 subtype: current discovery subsystem 00:26:06.128 treq: not specified, sq flow control disable supported 00:26:06.128 portid: 1 00:26:06.128 trsvcid: 4420 00:26:06.128 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:06.128 traddr: 10.0.0.1 00:26:06.128 eflags: none 00:26:06.128 sectype: none 00:26:06.128 =====Discovery Log Entry 1====== 00:26:06.128 trtype: tcp 00:26:06.128 adrfam: ipv4 00:26:06.128 subtype: nvme subsystem 00:26:06.128 treq: not specified, sq flow control disable supported 00:26:06.128 portid: 1 00:26:06.128 trsvcid: 4420 00:26:06.128 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:06.128 traddr: 10.0.0.1 00:26:06.128 eflags: none 00:26:06.128 sectype: none 00:26:06.128 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:06.128 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:06.128 ===================================================== 00:26:06.128 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:06.128 ===================================================== 00:26:06.128 Controller Capabilities/Features 00:26:06.128 ================================ 00:26:06.128 Vendor ID: 0000 00:26:06.128 Subsystem Vendor ID: 0000 00:26:06.128 Serial Number: 5889752af10a5136f692 00:26:06.128 Model Number: Linux 00:26:06.128 Firmware Version: 6.7.0-68 00:26:06.128 Recommended Arb Burst: 0 00:26:06.128 IEEE OUI Identifier: 00 00 00 00:26:06.128 Multi-path I/O 00:26:06.128 May have multiple subsystem ports: No 00:26:06.128 May have multiple controllers: No 00:26:06.128 Associated with SR-IOV VF: No 00:26:06.128 Max Data Transfer Size: Unlimited 00:26:06.128 Max Number of Namespaces: 0 
00:26:06.128 Max Number of I/O Queues: 1024 00:26:06.128 NVMe Specification Version (VS): 1.3 00:26:06.128 NVMe Specification Version (Identify): 1.3 00:26:06.128 Maximum Queue Entries: 1024 00:26:06.128 Contiguous Queues Required: No 00:26:06.128 Arbitration Mechanisms Supported 00:26:06.128 Weighted Round Robin: Not Supported 00:26:06.128 Vendor Specific: Not Supported 00:26:06.128 Reset Timeout: 7500 ms 00:26:06.128 Doorbell Stride: 4 bytes 00:26:06.128 NVM Subsystem Reset: Not Supported 00:26:06.128 Command Sets Supported 00:26:06.128 NVM Command Set: Supported 00:26:06.128 Boot Partition: Not Supported 00:26:06.128 Memory Page Size Minimum: 4096 bytes 00:26:06.128 Memory Page Size Maximum: 4096 bytes 00:26:06.128 Persistent Memory Region: Not Supported 00:26:06.128 Optional Asynchronous Events Supported 00:26:06.128 Namespace Attribute Notices: Not Supported 00:26:06.128 Firmware Activation Notices: Not Supported 00:26:06.128 ANA Change Notices: Not Supported 00:26:06.128 PLE Aggregate Log Change Notices: Not Supported 00:26:06.128 LBA Status Info Alert Notices: Not Supported 00:26:06.128 EGE Aggregate Log Change Notices: Not Supported 00:26:06.128 Normal NVM Subsystem Shutdown event: Not Supported 00:26:06.128 Zone Descriptor Change Notices: Not Supported 00:26:06.128 Discovery Log Change Notices: Supported 00:26:06.128 Controller Attributes 00:26:06.128 128-bit Host Identifier: Not Supported 00:26:06.128 Non-Operational Permissive Mode: Not Supported 00:26:06.128 NVM Sets: Not Supported 00:26:06.128 Read Recovery Levels: Not Supported 00:26:06.128 Endurance Groups: Not Supported 00:26:06.128 Predictable Latency Mode: Not Supported 00:26:06.128 Traffic Based Keep ALive: Not Supported 00:26:06.128 Namespace Granularity: Not Supported 00:26:06.128 SQ Associations: Not Supported 00:26:06.128 UUID List: Not Supported 00:26:06.128 Multi-Domain Subsystem: Not Supported 00:26:06.128 Fixed Capacity Management: Not Supported 00:26:06.128 Variable Capacity Management: Not Supported 00:26:06.128 Delete Endurance Group: Not Supported 00:26:06.128 Delete NVM Set: Not Supported 00:26:06.128 Extended LBA Formats Supported: Not Supported 00:26:06.128 Flexible Data Placement Supported: Not Supported 00:26:06.128 00:26:06.128 Controller Memory Buffer Support 00:26:06.128 ================================ 00:26:06.128 Supported: No 00:26:06.128 00:26:06.128 Persistent Memory Region Support 00:26:06.128 ================================ 00:26:06.128 Supported: No 00:26:06.128 00:26:06.128 Admin Command Set Attributes 00:26:06.128 ============================ 00:26:06.128 Security Send/Receive: Not Supported 00:26:06.128 Format NVM: Not Supported 00:26:06.128 Firmware Activate/Download: Not Supported 00:26:06.128 Namespace Management: Not Supported 00:26:06.128 Device Self-Test: Not Supported 00:26:06.128 Directives: Not Supported 00:26:06.128 NVMe-MI: Not Supported 00:26:06.128 Virtualization Management: Not Supported 00:26:06.128 Doorbell Buffer Config: Not Supported 00:26:06.128 Get LBA Status Capability: Not Supported 00:26:06.128 Command & Feature Lockdown Capability: Not Supported 00:26:06.128 Abort Command Limit: 1 00:26:06.128 Async Event Request Limit: 1 00:26:06.128 Number of Firmware Slots: N/A 00:26:06.128 Firmware Slot 1 Read-Only: N/A 00:26:06.389 Firmware Activation Without Reset: N/A 00:26:06.389 Multiple Update Detection Support: N/A 00:26:06.389 Firmware Update Granularity: No Information Provided 00:26:06.389 Per-Namespace SMART Log: No 00:26:06.389 Asymmetric Namespace Access Log Page: 
Not Supported 00:26:06.389 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:06.389 Command Effects Log Page: Not Supported 00:26:06.389 Get Log Page Extended Data: Supported 00:26:06.389 Telemetry Log Pages: Not Supported 00:26:06.389 Persistent Event Log Pages: Not Supported 00:26:06.389 Supported Log Pages Log Page: May Support 00:26:06.389 Commands Supported & Effects Log Page: Not Supported 00:26:06.389 Feature Identifiers & Effects Log Page:May Support 00:26:06.389 NVMe-MI Commands & Effects Log Page: May Support 00:26:06.389 Data Area 4 for Telemetry Log: Not Supported 00:26:06.389 Error Log Page Entries Supported: 1 00:26:06.389 Keep Alive: Not Supported 00:26:06.389 00:26:06.389 NVM Command Set Attributes 00:26:06.389 ========================== 00:26:06.389 Submission Queue Entry Size 00:26:06.389 Max: 1 00:26:06.389 Min: 1 00:26:06.389 Completion Queue Entry Size 00:26:06.389 Max: 1 00:26:06.389 Min: 1 00:26:06.389 Number of Namespaces: 0 00:26:06.389 Compare Command: Not Supported 00:26:06.389 Write Uncorrectable Command: Not Supported 00:26:06.389 Dataset Management Command: Not Supported 00:26:06.389 Write Zeroes Command: Not Supported 00:26:06.389 Set Features Save Field: Not Supported 00:26:06.389 Reservations: Not Supported 00:26:06.389 Timestamp: Not Supported 00:26:06.389 Copy: Not Supported 00:26:06.389 Volatile Write Cache: Not Present 00:26:06.389 Atomic Write Unit (Normal): 1 00:26:06.389 Atomic Write Unit (PFail): 1 00:26:06.389 Atomic Compare & Write Unit: 1 00:26:06.389 Fused Compare & Write: Not Supported 00:26:06.389 Scatter-Gather List 00:26:06.389 SGL Command Set: Supported 00:26:06.389 SGL Keyed: Not Supported 00:26:06.389 SGL Bit Bucket Descriptor: Not Supported 00:26:06.389 SGL Metadata Pointer: Not Supported 00:26:06.389 Oversized SGL: Not Supported 00:26:06.389 SGL Metadata Address: Not Supported 00:26:06.389 SGL Offset: Supported 00:26:06.389 Transport SGL Data Block: Not Supported 00:26:06.389 Replay Protected Memory Block: Not Supported 00:26:06.389 00:26:06.389 Firmware Slot Information 00:26:06.389 ========================= 00:26:06.389 Active slot: 0 00:26:06.389 00:26:06.389 00:26:06.389 Error Log 00:26:06.389 ========= 00:26:06.389 00:26:06.389 Active Namespaces 00:26:06.389 ================= 00:26:06.389 Discovery Log Page 00:26:06.389 ================== 00:26:06.389 Generation Counter: 2 00:26:06.389 Number of Records: 2 00:26:06.389 Record Format: 0 00:26:06.389 00:26:06.389 Discovery Log Entry 0 00:26:06.389 ---------------------- 00:26:06.389 Transport Type: 3 (TCP) 00:26:06.389 Address Family: 1 (IPv4) 00:26:06.389 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:06.389 Entry Flags: 00:26:06.389 Duplicate Returned Information: 0 00:26:06.389 Explicit Persistent Connection Support for Discovery: 0 00:26:06.389 Transport Requirements: 00:26:06.389 Secure Channel: Not Specified 00:26:06.389 Port ID: 1 (0x0001) 00:26:06.389 Controller ID: 65535 (0xffff) 00:26:06.389 Admin Max SQ Size: 32 00:26:06.389 Transport Service Identifier: 4420 00:26:06.389 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:06.389 Transport Address: 10.0.0.1 00:26:06.389 Discovery Log Entry 1 00:26:06.389 ---------------------- 00:26:06.389 Transport Type: 3 (TCP) 00:26:06.389 Address Family: 1 (IPv4) 00:26:06.389 Subsystem Type: 2 (NVM Subsystem) 00:26:06.389 Entry Flags: 00:26:06.389 Duplicate Returned Information: 0 00:26:06.389 Explicit Persistent Connection Support for Discovery: 0 00:26:06.389 Transport Requirements: 00:26:06.389 
Secure Channel: Not Specified 00:26:06.389 Port ID: 1 (0x0001) 00:26:06.389 Controller ID: 65535 (0xffff) 00:26:06.389 Admin Max SQ Size: 32 00:26:06.389 Transport Service Identifier: 4420 00:26:06.389 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:06.389 Transport Address: 10.0.0.1 00:26:06.389 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:06.389 get_feature(0x01) failed 00:26:06.389 get_feature(0x02) failed 00:26:06.389 get_feature(0x04) failed 00:26:06.389 ===================================================== 00:26:06.389 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:06.389 ===================================================== 00:26:06.389 Controller Capabilities/Features 00:26:06.389 ================================ 00:26:06.389 Vendor ID: 0000 00:26:06.389 Subsystem Vendor ID: 0000 00:26:06.389 Serial Number: 1f2c9ce91b33cbdef27b 00:26:06.389 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:06.389 Firmware Version: 6.7.0-68 00:26:06.389 Recommended Arb Burst: 6 00:26:06.389 IEEE OUI Identifier: 00 00 00 00:26:06.389 Multi-path I/O 00:26:06.389 May have multiple subsystem ports: Yes 00:26:06.389 May have multiple controllers: Yes 00:26:06.389 Associated with SR-IOV VF: No 00:26:06.389 Max Data Transfer Size: Unlimited 00:26:06.389 Max Number of Namespaces: 1024 00:26:06.389 Max Number of I/O Queues: 128 00:26:06.389 NVMe Specification Version (VS): 1.3 00:26:06.389 NVMe Specification Version (Identify): 1.3 00:26:06.389 Maximum Queue Entries: 1024 00:26:06.389 Contiguous Queues Required: No 00:26:06.389 Arbitration Mechanisms Supported 00:26:06.389 Weighted Round Robin: Not Supported 00:26:06.389 Vendor Specific: Not Supported 00:26:06.389 Reset Timeout: 7500 ms 00:26:06.389 Doorbell Stride: 4 bytes 00:26:06.389 NVM Subsystem Reset: Not Supported 00:26:06.389 Command Sets Supported 00:26:06.389 NVM Command Set: Supported 00:26:06.389 Boot Partition: Not Supported 00:26:06.389 Memory Page Size Minimum: 4096 bytes 00:26:06.389 Memory Page Size Maximum: 4096 bytes 00:26:06.389 Persistent Memory Region: Not Supported 00:26:06.389 Optional Asynchronous Events Supported 00:26:06.389 Namespace Attribute Notices: Supported 00:26:06.389 Firmware Activation Notices: Not Supported 00:26:06.389 ANA Change Notices: Supported 00:26:06.389 PLE Aggregate Log Change Notices: Not Supported 00:26:06.389 LBA Status Info Alert Notices: Not Supported 00:26:06.389 EGE Aggregate Log Change Notices: Not Supported 00:26:06.389 Normal NVM Subsystem Shutdown event: Not Supported 00:26:06.389 Zone Descriptor Change Notices: Not Supported 00:26:06.389 Discovery Log Change Notices: Not Supported 00:26:06.389 Controller Attributes 00:26:06.389 128-bit Host Identifier: Supported 00:26:06.389 Non-Operational Permissive Mode: Not Supported 00:26:06.389 NVM Sets: Not Supported 00:26:06.389 Read Recovery Levels: Not Supported 00:26:06.389 Endurance Groups: Not Supported 00:26:06.389 Predictable Latency Mode: Not Supported 00:26:06.389 Traffic Based Keep ALive: Supported 00:26:06.389 Namespace Granularity: Not Supported 00:26:06.389 SQ Associations: Not Supported 00:26:06.390 UUID List: Not Supported 00:26:06.390 Multi-Domain Subsystem: Not Supported 00:26:06.390 Fixed Capacity Management: Not Supported 00:26:06.390 Variable Capacity Management: Not Supported 00:26:06.390 
Delete Endurance Group: Not Supported 00:26:06.390 Delete NVM Set: Not Supported 00:26:06.390 Extended LBA Formats Supported: Not Supported 00:26:06.390 Flexible Data Placement Supported: Not Supported 00:26:06.390 00:26:06.390 Controller Memory Buffer Support 00:26:06.390 ================================ 00:26:06.390 Supported: No 00:26:06.390 00:26:06.390 Persistent Memory Region Support 00:26:06.390 ================================ 00:26:06.390 Supported: No 00:26:06.390 00:26:06.390 Admin Command Set Attributes 00:26:06.390 ============================ 00:26:06.390 Security Send/Receive: Not Supported 00:26:06.390 Format NVM: Not Supported 00:26:06.390 Firmware Activate/Download: Not Supported 00:26:06.390 Namespace Management: Not Supported 00:26:06.390 Device Self-Test: Not Supported 00:26:06.390 Directives: Not Supported 00:26:06.390 NVMe-MI: Not Supported 00:26:06.390 Virtualization Management: Not Supported 00:26:06.390 Doorbell Buffer Config: Not Supported 00:26:06.390 Get LBA Status Capability: Not Supported 00:26:06.390 Command & Feature Lockdown Capability: Not Supported 00:26:06.390 Abort Command Limit: 4 00:26:06.390 Async Event Request Limit: 4 00:26:06.390 Number of Firmware Slots: N/A 00:26:06.390 Firmware Slot 1 Read-Only: N/A 00:26:06.390 Firmware Activation Without Reset: N/A 00:26:06.390 Multiple Update Detection Support: N/A 00:26:06.390 Firmware Update Granularity: No Information Provided 00:26:06.390 Per-Namespace SMART Log: Yes 00:26:06.390 Asymmetric Namespace Access Log Page: Supported 00:26:06.390 ANA Transition Time : 10 sec 00:26:06.390 00:26:06.390 Asymmetric Namespace Access Capabilities 00:26:06.390 ANA Optimized State : Supported 00:26:06.390 ANA Non-Optimized State : Supported 00:26:06.390 ANA Inaccessible State : Supported 00:26:06.390 ANA Persistent Loss State : Supported 00:26:06.390 ANA Change State : Supported 00:26:06.390 ANAGRPID is not changed : No 00:26:06.390 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:06.390 00:26:06.390 ANA Group Identifier Maximum : 128 00:26:06.390 Number of ANA Group Identifiers : 128 00:26:06.390 Max Number of Allowed Namespaces : 1024 00:26:06.390 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:06.390 Command Effects Log Page: Supported 00:26:06.390 Get Log Page Extended Data: Supported 00:26:06.390 Telemetry Log Pages: Not Supported 00:26:06.390 Persistent Event Log Pages: Not Supported 00:26:06.390 Supported Log Pages Log Page: May Support 00:26:06.390 Commands Supported & Effects Log Page: Not Supported 00:26:06.390 Feature Identifiers & Effects Log Page:May Support 00:26:06.390 NVMe-MI Commands & Effects Log Page: May Support 00:26:06.390 Data Area 4 for Telemetry Log: Not Supported 00:26:06.390 Error Log Page Entries Supported: 128 00:26:06.390 Keep Alive: Supported 00:26:06.390 Keep Alive Granularity: 1000 ms 00:26:06.390 00:26:06.390 NVM Command Set Attributes 00:26:06.390 ========================== 00:26:06.390 Submission Queue Entry Size 00:26:06.390 Max: 64 00:26:06.390 Min: 64 00:26:06.390 Completion Queue Entry Size 00:26:06.390 Max: 16 00:26:06.390 Min: 16 00:26:06.390 Number of Namespaces: 1024 00:26:06.390 Compare Command: Not Supported 00:26:06.390 Write Uncorrectable Command: Not Supported 00:26:06.390 Dataset Management Command: Supported 00:26:06.390 Write Zeroes Command: Supported 00:26:06.390 Set Features Save Field: Not Supported 00:26:06.390 Reservations: Not Supported 00:26:06.390 Timestamp: Not Supported 00:26:06.390 Copy: Not Supported 00:26:06.390 Volatile Write Cache: Present 
00:26:06.390 Atomic Write Unit (Normal): 1 00:26:06.390 Atomic Write Unit (PFail): 1 00:26:06.390 Atomic Compare & Write Unit: 1 00:26:06.390 Fused Compare & Write: Not Supported 00:26:06.390 Scatter-Gather List 00:26:06.390 SGL Command Set: Supported 00:26:06.390 SGL Keyed: Not Supported 00:26:06.390 SGL Bit Bucket Descriptor: Not Supported 00:26:06.390 SGL Metadata Pointer: Not Supported 00:26:06.390 Oversized SGL: Not Supported 00:26:06.390 SGL Metadata Address: Not Supported 00:26:06.390 SGL Offset: Supported 00:26:06.390 Transport SGL Data Block: Not Supported 00:26:06.390 Replay Protected Memory Block: Not Supported 00:26:06.390 00:26:06.390 Firmware Slot Information 00:26:06.390 ========================= 00:26:06.390 Active slot: 0 00:26:06.390 00:26:06.390 Asymmetric Namespace Access 00:26:06.390 =========================== 00:26:06.390 Change Count : 0 00:26:06.390 Number of ANA Group Descriptors : 1 00:26:06.390 ANA Group Descriptor : 0 00:26:06.390 ANA Group ID : 1 00:26:06.390 Number of NSID Values : 1 00:26:06.390 Change Count : 0 00:26:06.390 ANA State : 1 00:26:06.390 Namespace Identifier : 1 00:26:06.390 00:26:06.390 Commands Supported and Effects 00:26:06.390 ============================== 00:26:06.390 Admin Commands 00:26:06.390 -------------- 00:26:06.390 Get Log Page (02h): Supported 00:26:06.390 Identify (06h): Supported 00:26:06.390 Abort (08h): Supported 00:26:06.390 Set Features (09h): Supported 00:26:06.390 Get Features (0Ah): Supported 00:26:06.390 Asynchronous Event Request (0Ch): Supported 00:26:06.390 Keep Alive (18h): Supported 00:26:06.390 I/O Commands 00:26:06.390 ------------ 00:26:06.390 Flush (00h): Supported 00:26:06.390 Write (01h): Supported LBA-Change 00:26:06.390 Read (02h): Supported 00:26:06.390 Write Zeroes (08h): Supported LBA-Change 00:26:06.390 Dataset Management (09h): Supported 00:26:06.390 00:26:06.390 Error Log 00:26:06.390 ========= 00:26:06.390 Entry: 0 00:26:06.390 Error Count: 0x3 00:26:06.390 Submission Queue Id: 0x0 00:26:06.390 Command Id: 0x5 00:26:06.390 Phase Bit: 0 00:26:06.390 Status Code: 0x2 00:26:06.390 Status Code Type: 0x0 00:26:06.390 Do Not Retry: 1 00:26:06.390 Error Location: 0x28 00:26:06.390 LBA: 0x0 00:26:06.390 Namespace: 0x0 00:26:06.390 Vendor Log Page: 0x0 00:26:06.390 ----------- 00:26:06.390 Entry: 1 00:26:06.390 Error Count: 0x2 00:26:06.390 Submission Queue Id: 0x0 00:26:06.390 Command Id: 0x5 00:26:06.390 Phase Bit: 0 00:26:06.390 Status Code: 0x2 00:26:06.390 Status Code Type: 0x0 00:26:06.390 Do Not Retry: 1 00:26:06.390 Error Location: 0x28 00:26:06.390 LBA: 0x0 00:26:06.390 Namespace: 0x0 00:26:06.390 Vendor Log Page: 0x0 00:26:06.390 ----------- 00:26:06.390 Entry: 2 00:26:06.390 Error Count: 0x1 00:26:06.390 Submission Queue Id: 0x0 00:26:06.390 Command Id: 0x4 00:26:06.390 Phase Bit: 0 00:26:06.390 Status Code: 0x2 00:26:06.390 Status Code Type: 0x0 00:26:06.390 Do Not Retry: 1 00:26:06.390 Error Location: 0x28 00:26:06.390 LBA: 0x0 00:26:06.390 Namespace: 0x0 00:26:06.390 Vendor Log Page: 0x0 00:26:06.390 00:26:06.390 Number of Queues 00:26:06.390 ================ 00:26:06.390 Number of I/O Submission Queues: 128 00:26:06.390 Number of I/O Completion Queues: 128 00:26:06.390 00:26:06.390 ZNS Specific Controller Data 00:26:06.390 ============================ 00:26:06.390 Zone Append Size Limit: 0 00:26:06.390 00:26:06.390 00:26:06.390 Active Namespaces 00:26:06.390 ================= 00:26:06.390 get_feature(0x05) failed 00:26:06.390 Namespace ID:1 00:26:06.390 Command Set Identifier: NVM (00h) 
00:26:06.390 Deallocate: Supported 00:26:06.390 Deallocated/Unwritten Error: Not Supported 00:26:06.390 Deallocated Read Value: Unknown 00:26:06.390 Deallocate in Write Zeroes: Not Supported 00:26:06.390 Deallocated Guard Field: 0xFFFF 00:26:06.390 Flush: Supported 00:26:06.390 Reservation: Not Supported 00:26:06.390 Namespace Sharing Capabilities: Multiple Controllers 00:26:06.390 Size (in LBAs): 1310720 (5GiB) 00:26:06.390 Capacity (in LBAs): 1310720 (5GiB) 00:26:06.390 Utilization (in LBAs): 1310720 (5GiB) 00:26:06.390 UUID: 74829792-4f37-4123-9a19-c01a049c54c5 00:26:06.390 Thin Provisioning: Not Supported 00:26:06.390 Per-NS Atomic Units: Yes 00:26:06.390 Atomic Boundary Size (Normal): 0 00:26:06.390 Atomic Boundary Size (PFail): 0 00:26:06.390 Atomic Boundary Offset: 0 00:26:06.390 NGUID/EUI64 Never Reused: No 00:26:06.390 ANA group ID: 1 00:26:06.390 Namespace Write Protected: No 00:26:06.390 Number of LBA Formats: 1 00:26:06.390 Current LBA Format: LBA Format #00 00:26:06.390 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:26:06.390 00:26:06.390 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:06.390 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:06.390 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:06.390 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.390 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:06.390 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.390 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:06.390 rmmod nvme_tcp 00:26:06.650 rmmod nvme_fabrics 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:06.650 
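The identify dump above was produced by the single spdk_nvme_identify call recorded near its start. A minimal way to repeat it against the same kernel target, with the addressing and subsystem NQN taken directly from the trace, is:

# Query the kernel NVMe-oF/TCP target the test exported on 10.0.0.1:4420
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The interleaved get_feature(0x01/0x02/0x04/0x05) failures are expected against the Linux kernel target, which implements only a small subset of Get Features identifiers; the Invalid Field status recorded in the three error-log entries above (Status Code 0x2, Type 0x0) is consistent with that.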
20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:06.650 20:25:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:07.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:07.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:07.478 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:07.478 00:26:07.478 real 0m2.910s 00:26:07.478 user 0m1.010s 00:26:07.478 sys 0m1.367s 00:26:07.478 20:25:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:07.478 20:25:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.478 ************************************ 00:26:07.478 END TEST nvmf_identify_kernel_target 00:26:07.478 ************************************ 00:26:07.478 20:25:56 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:07.478 20:25:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:07.478 20:25:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:07.478 20:25:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:07.478 ************************************ 00:26:07.478 START TEST nvmf_auth_host 00:26:07.478 ************************************ 00:26:07.478 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:07.738 * Looking for test storage... 
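The clean_kernel_target sequence traced above removes the kernel target in the reverse order of its creation. Collected into one place it is roughly the following (the redirect target of the bare 'echo 0' is not visible in the trace and is assumed to be the namespace enable attribute):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"        # assumed target of the bare 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                   # unload once the configfs tree is empty

setup.sh is then re-run so the NVMe PCI devices are handed back to uio_pci_generic for the next test, as the rebind messages above show.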
00:26:07.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:07.738 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:07.738 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:07.738 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.738 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.738 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.738 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:07.739 Cannot find device "nvmf_tgt_br" 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:07.739 Cannot find device "nvmf_tgt_br2" 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:07.739 Cannot find device "nvmf_tgt_br" 
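nvmf_veth_init first tries to delete any leftovers from a previous run (hence the "Cannot find device" messages) and then rebuilds the test network. The end state assembled by the commands that follow is, in outline:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                              # bridge the three peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The trace additionally brings every link up and verifies all three addresses with single pings before starting the target inside the namespace.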
00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:07.739 Cannot find device "nvmf_tgt_br2" 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:07.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:07.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:07.739 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:07.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:26:07.999 00:26:07.999 --- 10.0.0.2 ping statistics --- 00:26:07.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.999 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:07.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:07.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:26:07.999 00:26:07.999 --- 10.0.0.3 ping statistics --- 00:26:07.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.999 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:07.999 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:07.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:26:07.999 00:26:07.999 --- 10.0.0.1 ping statistics --- 00:26:07.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.999 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=110124 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 110124 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 110124 ']' 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:07.999 20:25:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:07.999 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a1a3f259da2aa3445c1702005136a2a1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZTd 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a1a3f259da2aa3445c1702005136a2a1 0 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a1a3f259da2aa3445c1702005136a2a1 0 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a1a3f259da2aa3445c1702005136a2a1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZTd 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZTd 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZTd 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a7bd926faa28ea4ebc0f376c7a5ca43410a15f8199a7baacd5040635ba2c5193 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SjU 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a7bd926faa28ea4ebc0f376c7a5ca43410a15f8199a7baacd5040635ba2c5193 3 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a7bd926faa28ea4ebc0f376c7a5ca43410a15f8199a7baacd5040635ba2c5193 3 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a7bd926faa28ea4ebc0f376c7a5ca43410a15f8199a7baacd5040635ba2c5193 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SjU 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SjU 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SjU 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cb08201653b5eb9c796cfc866efc1f3e07b0648c3a6818fc 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.h66 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cb08201653b5eb9c796cfc866efc1f3e07b0648c3a6818fc 0 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cb08201653b5eb9c796cfc866efc1f3e07b0648c3a6818fc 0 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cb08201653b5eb9c796cfc866efc1f3e07b0648c3a6818fc 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.h66 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.h66 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.h66 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ec665e44f8561b5b0af8132d73c14214b4c76ee27bdc8129 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Mun 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ec665e44f8561b5b0af8132d73c14214b4c76ee27bdc8129 2 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ec665e44f8561b5b0af8132d73c14214b4c76ee27bdc8129 2 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ec665e44f8561b5b0af8132d73c14214b4c76ee27bdc8129 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Mun 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Mun 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Mun 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b776091f2053178b014860f4f83c2ca3 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yku 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b776091f2053178b014860f4f83c2ca3 
1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b776091f2053178b014860f4f83c2ca3 1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b776091f2053178b014860f4f83c2ca3 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:09.379 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yku 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yku 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yku 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ca224a20cd7acb38b5eb10e10fe650da 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oMb 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ca224a20cd7acb38b5eb10e10fe650da 1 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ca224a20cd7acb38b5eb10e10fe650da 1 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ca224a20cd7acb38b5eb10e10fe650da 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oMb 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oMb 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oMb 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:09.639 20:25:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=350a04da83a9242b43e743c55f0488e410c33d0313a071de 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OG5 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 350a04da83a9242b43e743c55f0488e410c33d0313a071de 2 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 350a04da83a9242b43e743c55f0488e410c33d0313a071de 2 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=350a04da83a9242b43e743c55f0488e410c33d0313a071de 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OG5 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OG5 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.OG5 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:09.639 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=61af81999a2125f4e8bfa5056c09d4a7 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V1u 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 61af81999a2125f4e8bfa5056c09d4a7 0 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 61af81999a2125f4e8bfa5056c09d4a7 0 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=61af81999a2125f4e8bfa5056c09d4a7 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V1u 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V1u 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.V1u 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=042ebdbd54accc923f47fb5a0d01b2e928e9d884afb8200cbe3a3975d7f86433 00:26:09.640 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8EM 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 042ebdbd54accc923f47fb5a0d01b2e928e9d884afb8200cbe3a3975d7f86433 3 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 042ebdbd54accc923f47fb5a0d01b2e928e9d884afb8200cbe3a3975d7f86433 3 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=042ebdbd54accc923f47fb5a0d01b2e928e9d884afb8200cbe3a3975d7f86433 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8EM 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8EM 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.8EM 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 110124 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 110124 ']' 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:09.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
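Each of the key files registered with the target below was produced by gen_dhchap_key, whose traced steps reduce to the following sketch for the null/32 case (the redirect of the formatted key into the temp file is implied rather than shown in the trace; sha256/sha384/sha512 keys use digest ids 1/2/3 instead of 0):

# gen_dhchap_key null 32, as traced above; format_dhchap_key comes from the
# sourced /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
key=$(xxd -p -c0 -l 16 /dev/urandom)             # 32 hex characters of key material
file=$(mktemp -t spdk.key-null.XXX)
format_dhchap_key "$key" 0 > "$file"             # emits the DHHC-1:00:<base64>: form used later (redirect assumed)
chmod 0600 "$file"
echo "$file"                                     # path stored into keys[0] / ckeys[0]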
00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:09.899 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZTd 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SjU ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SjU 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.h66 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Mun ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Mun 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yku 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oMb ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oMb 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:10.159 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.OG5 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.V1u ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.V1u 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8EM 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
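configure_kernel_target, traced over the next lines, exports one of the local NVMe namespaces as the kernel target nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420. The bare echo commands in the trace write into nvmet configfs attributes; spelled out with the standard attribute names (an assumption, since the trace hides the redirect targets), the setup amounts to:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"        # assumed target of the first bare 'echo 1'
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # the unused block device found by the GPT/blkid probing below
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

nvme discover against 10.0.0.1:4420 then reports the discovery subsystem plus the new cnode0 entry, after which host/auth.sh creates /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, links it under the subsystem's allowed_hosts, and programs the DHCHAP hash, DH group and key values generated earlier.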
00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:10.160 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:10.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:10.728 Waiting for block devices as requested 00:26:10.728 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:10.728 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:11.297 No valid GPT data, bailing 00:26:11.297 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:11.557 No valid GPT data, bailing 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:11.557 No valid GPT data, bailing 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:11.557 No valid GPT data, bailing 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:11.557 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:11.816 20:26:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -a 10.0.0.1 -t tcp -s 4420 00:26:11.816 00:26:11.816 Discovery Log Number of Records 2, Generation counter 2 00:26:11.816 =====Discovery Log Entry 0====== 00:26:11.816 trtype: tcp 00:26:11.816 adrfam: ipv4 00:26:11.816 subtype: current discovery subsystem 00:26:11.816 treq: not specified, sq flow control disable supported 00:26:11.816 portid: 1 00:26:11.816 trsvcid: 4420 00:26:11.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:11.816 traddr: 10.0.0.1 00:26:11.816 eflags: none 00:26:11.816 sectype: none 00:26:11.816 =====Discovery Log Entry 1====== 00:26:11.816 trtype: tcp 00:26:11.816 adrfam: ipv4 00:26:11.816 subtype: nvme subsystem 00:26:11.816 treq: not specified, sq flow control disable supported 00:26:11.816 portid: 1 00:26:11.816 trsvcid: 4420 00:26:11.816 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:11.816 traddr: 10.0.0.1 00:26:11.816 eflags: none 00:26:11.816 sectype: none 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:11.816 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.817 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 nvme0n1 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.076 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 nvme0n1 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.076 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.335 nvme0n1 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.335 20:26:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.335 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.336 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.594 nvme0n1 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:12.594 20:26:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.594 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.595 nvme0n1 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.595 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 nvme0n1 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.852 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.853 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:12.853 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:12.853 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.853 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.111 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.370 nvme0n1 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.370 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.371 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.371 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.371 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.371 nvme0n1 00:26:13.371 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.371 20:26:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.371 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.371 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.371 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.629 nvme0n1 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.629 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:13.887 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.888 nvme0n1 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
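Each pass of the loop above follows the same shape: nvmet_auth_set_key programs the key pair into the kernel target's entry for nqn.2024-02.io.spdk:host0, connect_authenticate restricts the SPDK initiator to the digest/DH group under test and attaches with the matching keyring entries, then the controller is verified and detached before the next combination. Collected from the trace, one sha256/ffdhe3072 pass on the initiator side looks like the sketch below (rpc_cmd is the test framework's wrapper around scripts/rpc.py; key IDs without a controller key, such as key4, simply drop --dhchap-ctrlr-key):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # expect the attached controller to show up
    rpc_cmd bdev_nvme_detach_controller nvme0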
00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.888 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.146 nvme0n1 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.146 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.147 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:14.147 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:14.147 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.147 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.713 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.972 nvme0n1 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.972 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.249 nvme0n1 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.249 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 nvme0n1 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:15.508 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.509 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.767 nvme0n1 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:15.767 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.768 20:26:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.768 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.026 nvme0n1 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.026 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.927 nvme0n1 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.927 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.927 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.189 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 nvme0n1 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.460 
20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.739 nvme0n1 00:26:18.739 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.739 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.739 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.739 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.739 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.739 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.018 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.291 nvme0n1 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.291 20:26:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.291 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.550 nvme0n1 00:26:19.550 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.550 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.550 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.550 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.550 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.550 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.809 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.377 nvme0n1 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.377 20:26:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.377 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.944 nvme0n1 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.944 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.511 nvme0n1 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.511 
20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.512 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.079 nvme0n1 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.079 
20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.079 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.080 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.647 nvme0n1 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.647 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.906 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.907 nvme0n1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
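The xtrace above is produced by the main driver loop of host/auth.sh. A minimal sketch of that loop, reconstructed from the traced lines (host/auth.sh@100-@104); the digests/dhgroups/keys arrays and the two helper functions are the ones defined earlier in the script, and the commented values are only the ones visible in this log:

# Sketch of the host/auth.sh driver loop as seen in this trace (@100-@104).
# Assumes the digests/dhgroups/keys arrays and helpers defined earlier in host/auth.sh.
for digest in "${digests[@]}"; do             # sha256, then sha384 in this section
  for dhgroup in "${dhgroups[@]}"; do         # ffdhe8192, ffdhe2048, ffdhe3072, ...
    for keyid in "${!keys[@]}"; do            # key ids 0 through 4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target (nvmet) side
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach on the host side
    done
  done
done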
00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.907 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.166 nvme0n1 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.166 nvme0n1 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.166 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.424 nvme0n1 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.424 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.425 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.683 nvme0n1 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
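The get_main_ns_ip fragments that recur throughout this trace resolve which address the host should dial. A simplified reconstruction from the nvmf/common.sh lines traced here (@741-@755): the function wrapper and the name of the transport variable are assumptions, while the candidate table and the 10.0.0.1 result are taken from the log:

# Simplified reconstruction of the helper traced at nvmf/common.sh@741-@755.
# TEST_TRANSPORT is an assumed variable name; it is "tcp" in this run.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this log) use the initiator IP
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    echo "${!ip}"                          # indirect expansion -> 10.0.0.1 in this run
}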
00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.683 nvme0n1 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.683 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.942 nvme0n1 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.942 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.942 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.211 nvme0n1 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.211 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.475 nvme0n1 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.475 nvme0n1 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.475 20:26:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.475 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.733 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.733 nvme0n1 00:26:24.734 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.734 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.734 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.734 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.734 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.992 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.992 nvme0n1 00:26:24.992 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.992 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.992 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.992 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.992 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.250 20:26:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.250 nvme0n1 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.250 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:25.509 20:26:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.509 nvme0n1 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.509 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.767 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:25.768 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.026 nvme0n1 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.026 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.027 20:26:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.286 nvme0n1 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.286 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.854 nvme0n1 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.854 20:26:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.854 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.855 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.855 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.855 20:26:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.114 nvme0n1 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.114 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.373 nvme0n1 00:26:27.373 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.373 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.373 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.373 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.373 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
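The trace above keeps repeating the same connect_authenticate cycle for each key: set the DH-HMAC-CHAP options, attach the controller with the key under test, confirm a controller named nvme0 appears, then detach it before the next iteration. A minimal sketch of one such cycle, using only the RPC names and flag values visible in the trace (rpc_cmd is assumed to be the suite's wrapper around SPDK's scripts/rpc.py; this is an illustration of the traced calls, not the literal auth.sh source):

  # one connect_authenticate iteration as traced above (sha384 / ffdhe6144 / keyid 4)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  # key4 has no controller key in this log, so --dhchap-ctrlr-key is omitted for it
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0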
00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.632 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.892 nvme0n1 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
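Here the log rolls over to the ffdhe8192 group and starts again at key 0, which reflects the outer structure marked by auth.sh@101-103 in the trace: one loop over DH groups, one loop over key indices, each iteration programming the target-side key and then authenticating from the host. A hedged reconstruction of that driver loop, with the optional controller-key idiom copied from the auth.sh@58 trace (the array names and helper functions are the ones shown in the trace; their definitions are elided in this sketch):

  for dhgroup in "${dhgroups[@]}"; do    # ffdhe3072/4096/6144/8192 in this stretch of the log
      for keyid in "${!keys[@]}"; do     # key ids 0 through 4
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program the target-side key
          connect_authenticate sha384 "$dhgroup" "$keyid"  # attach, verify, detach (see sketch above)
      done
  done
  # inside connect_authenticate: append --dhchap-ctrlr-key only when a controller key exists
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})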
00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.892 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.893 20:26:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.462 nvme0n1 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.462 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.463 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.463 20:26:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.463 20:26:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.463 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.463 20:26:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.031 nvme0n1 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.032 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.291 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.858 nvme0n1 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.858 20:26:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.425 nvme0n1 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.425 20:26:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.425 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.989 nvme0n1 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.989 20:26:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.989 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.250 nvme0n1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.250 20:26:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.250 nvme0n1 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.250 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.509 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.510 nvme0n1 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.510 20:26:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.510 20:26:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.510 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 nvme0n1 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 nvme0n1 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.770 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.030 20:26:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.030 nvme0n1 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.030 
20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.030 20:26:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.030 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 nvme0n1 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
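Note: the host/auth.sh@48-@51 echoes around this point are nvmet_auth_set_key installing the target-side expectations for sha512/ffdhe3072, keyid 2. xtrace does not show redirection targets, so the configfs paths below are an assumption based on the kernel nvmet host attributes, and the secrets are abbreviated to the DHHC-1 values already shown in full in the trace:

  # Assumed target-side writes; only the echoed values are visible in the trace above.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
  echo 'hmac(sha512)'      > "$host/dhchap_hash"      # @48: digest for this pass
  echo ffdhe3072           > "$host/dhchap_dhgroup"   # @49: DH group for this pass
  echo 'DHHC-1:01:Yjc3...' > "$host/dhchap_key"       # @50: host secret for keyid 2 (full value in trace)
  echo 'DHHC-1:01:Y2Ey...' > "$host/dhchap_ctrl_key"  # @51: controller secret for bidirectional auth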
00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.290 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.550 nvme0n1 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.550 20:26:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
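The get_main_ns_ip helper that the trace keeps expanding simply maps the transport to the right address variable and resolves it by indirection. A rough reconstruction from the expanded nvmf/common.sh frames follows; TEST_TRANSPORT is an assumed name for the transport variable (only its value, tcp, is visible here), and the real helper may carry branches this log does not show.

  # Approximate shape of get_main_ns_ip (nvmf/common.sh@741-755) as seen in the trace;
  # TEST_TRANSPORT is an assumed variable name, and fallback paths are omitted.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
          return 1
      fi
      ip=${ip_candidates[$TEST_TRANSPORT]}     # -> NVMF_INITIATOR_IP for tcp
      [[ -z ${!ip} ]] && return 1              # indirect lookup of the address variable
      echo "${!ip}"                            # 10.0.0.1 in this run
  }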
00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.550 nvme0n1 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.550 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:32.813 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.814 
20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.814 nvme0n1 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.814 20:26:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 nvme0n1 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.095 20:26:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.095 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.096 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.096 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.366 nvme0n1 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
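On the target side, each round starts with the nvmet_auth_set_key call expanded above: it writes the HMAC name, the DH group, and the DHHC-1 secrets for the given key ID. The wrapped trace shows the echo commands but not where their output is redirected, so the sketch below uses placeholder variables (HASH_FILE, DHGROUP_FILE, KEY_FILE, CKEY_FILE) for the kernel nvmet attributes; those names are illustrative only, not the script's real paths.

  # Shape of nvmet_auth_set_key (host/auth.sh@42-51) as seen in the trace.
  # The redirection targets are not visible in this log; the *_FILE variables
  # below are placeholders standing in for the nvmet attributes.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      echo "hmac(${digest})" > "$HASH_FILE"
      echo "$dhgroup" > "$DHGROUP_FILE"
      echo "$key" > "$KEY_FILE"
      [[ -z $ckey ]] || echo "$ckey" > "$CKEY_FILE"   # key ID 4 carries no controller key in this run
  }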
00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.366 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.625 nvme0n1 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.625 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.626 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.884 nvme0n1 00:26:33.884 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.884 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.884 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.885 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.144 nvme0n1 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
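By this point the log has moved from ffdhe4096 to ffdhe6144, which makes the driving structure easy to see: the host/auth.sh@101-104 frames are a nested loop over DH groups and key IDs, provisioning the target keys first and attempting the authenticated connect second. Only the sha512 digest appears in this part of the log; the skeleton below is implied by those frames rather than copied from the script, and the dhgroups/keys arrays are populated earlier in the run.

  # Loop skeleton implied by the host/auth.sh@101-104 frames:
  for dhgroup in "${dhgroups[@]}"; do                      # ffdhe3072 ... ffdhe8192 in this part of the log
      for keyid in "${!keys[@]}"; do                       # key IDs 0-4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"        # provision the target
          connect_authenticate sha512 "$dhgroup" "$keyid"      # attach, verify, detach on the host
      done
  done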
00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.144 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.403 nvme0n1 00:26:34.403 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.403 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.403 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.403 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.403 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
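One bash detail in the connect_authenticate frames is worth spelling out: host/auth.sh@58 builds the controller-key argument with the ${var:+word} expansion, so --dhchap-ctrlr-key is passed only when a controller key exists for that key ID (key 4 has none in this run, which is why its attach_controller calls carry no ckey). A standalone illustration with hypothetical values:

  # ${var:+word} expands to word only when var is set and non-empty, so the
  # extra arguments vanish entirely for key IDs without a controller key.
  ckeys=( "ckey-secret-0" "" )                             # hypothetical values for illustration
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # prints: keyid=0 extra args: --dhchap-ctrlr-key ckey0
  #         keyid=1 extra args: <none>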
00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.661 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.920 nvme0n1 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.920 20:26:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.487 nvme0n1 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.487 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.746 nvme0n1 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:35.746 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.747 20:26:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.005 nvme0n1 00:26:36.005 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.005 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.005 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.005 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.005 20:26:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.005 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFhM2YyNTlkYTJhYTM0NDVjMTcwMjAwNTEzNmEyYTFKJx34: 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: ]] 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdiZDkyNmZhYTI4ZWE0ZWJjMGYzNzZjN2E1Y2E0MzQxMGExNWY4MTk5YTdiYWFjZDUwNDA2MzViYTJjNTE5M6XJZHM=: 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.263 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.264 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.830 nvme0n1 00:26:36.830 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.831 20:26:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 nvme0n1 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 20:26:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjc3NjA5MWYyMDUzMTc4YjAxNDg2MGY0ZjgzYzJjYTMzStYr: 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EyMjRhMjBjZDdhY2IzOGI1ZWIxMGUxMGZlNjUwZGE272zO: 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.399 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 nvme0n1 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzUwYTA0ZGE4M2E5MjQyYjQzZTc0M2M1NWYwNDg4ZTQxMGMzM2QwMzEzYTA3MWRlk7BzGA==: 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhZjgxOTk5YTIxMjVmNGU4YmZhNTA1NmMwOWQ0YTdi5pLD: 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:37.966 20:26:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.966 20:26:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.532 nvme0n1 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.532 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyZWJkYmQ1NGFjY2M5MjNmNDdmYjVhMGQwMWIyZTkyOGU5ZDg4NGFmYjgyMDBjYmUzYTM5NzVkN2Y4NjQzM804Wm8=: 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:38.533 20:26:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.100 nvme0n1 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2IwODIwMTY1M2I1ZWI5Yzc5NmNmYzg2NmVmYzFmM2UwN2IwNjQ4YzNhNjgxOGZjzI7IVw==: 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM2NjVlNDRmODU2MWI1YjBhZjgxMzJkNzNjMTQyMTRiNGM3NmVlMjdiZGM4MTI5X6CybQ==: 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.100 
20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.100 2024/07/14 20:26:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:39.100 request: 00:26:39.100 { 00:26:39.100 "method": "bdev_nvme_attach_controller", 00:26:39.100 "params": { 00:26:39.100 "name": "nvme0", 00:26:39.100 "trtype": "tcp", 00:26:39.100 "traddr": "10.0.0.1", 00:26:39.100 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:39.100 "adrfam": "ipv4", 00:26:39.100 "trsvcid": "4420", 00:26:39.100 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:26:39.100 } 00:26:39.100 } 00:26:39.100 Got JSON-RPC error response 00:26:39.100 GoRPCClient: error on JSON-RPC call 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.100 
20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.100 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.359 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.360 2024/07/14 20:26:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:39.360 request: 00:26:39.360 { 00:26:39.360 "method": "bdev_nvme_attach_controller", 00:26:39.360 "params": { 00:26:39.360 "name": "nvme0", 00:26:39.360 "trtype": "tcp", 00:26:39.360 "traddr": "10.0.0.1", 00:26:39.360 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:39.360 "adrfam": "ipv4", 00:26:39.360 "trsvcid": "4420", 00:26:39.360 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:39.360 "dhchap_key": "key2" 00:26:39.360 } 00:26:39.360 } 00:26:39.360 Got 
JSON-RPC error response 00:26:39.360 GoRPCClient: error on JSON-RPC call 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.360 2024/07/14 20:26:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:39.360 request: 00:26:39.360 { 00:26:39.360 "method": "bdev_nvme_attach_controller", 00:26:39.360 "params": { 00:26:39.360 "name": "nvme0", 00:26:39.360 "trtype": "tcp", 00:26:39.360 "traddr": "10.0.0.1", 00:26:39.360 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:39.360 "adrfam": "ipv4", 00:26:39.360 "trsvcid": "4420", 00:26:39.360 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:39.360 "dhchap_key": "key1", 00:26:39.360 "dhchap_ctrlr_key": "ckey2" 00:26:39.360 } 00:26:39.360 } 00:26:39.360 Got JSON-RPC error response 00:26:39.360 GoRPCClient: error on JSON-RPC call 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:39.360 rmmod nvme_tcp 00:26:39.360 rmmod nvme_fabrics 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 110124 ']' 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 110124 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 110124 ']' 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 110124 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:39.360 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110124 00:26:39.619 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:39.619 killing process with pid 110124 00:26:39.619 20:26:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:39.619 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110124' 00:26:39.619 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 110124 00:26:39.619 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 110124 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:39.878 20:26:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:40.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.705 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:40.705 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:40.705 20:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZTd /tmp/spdk.key-null.h66 /tmp/spdk.key-sha256.yku /tmp/spdk.key-sha384.OG5 /tmp/spdk.key-sha512.8EM /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:40.705 20:26:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:41.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
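For reference, the DH-HMAC-CHAP round trips traced above reduce to a short RPC sequence. This is a minimal sketch, assuming SPDK's scripts/rpc.py (which the test's rpc_cmd helper wraps) and assuming the host/controller secrets were already registered in the keyring under the names key3/ckey3 earlier in auth.sh; the bdev name, address, and NQNs are taken verbatim from the trace.
# restrict the initiator to the digest/DH group under test, as auth.sh does per iteration
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# attach with a host key and, when one is defined, a distinct controller (bidirectional) key
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# confirm the controller authenticated, then tear down before the next digest/dhgroup/key combination
scripts/rpc.py bdev_nvme_get_controllers
scripts/rpc.py bdev_nvme_detach_controller nvme0
The negative checks later in the trace run the same attach with the key omitted or mismatched and expect the JSON-RPC Code=-5 (Input/output error) responses shown above.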
00:26:41.274 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:41.274 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:41.274 00:26:41.274 real 0m33.608s 00:26:41.274 user 0m30.776s 00:26:41.274 sys 0m3.981s 00:26:41.274 20:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:41.274 20:26:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.274 ************************************ 00:26:41.274 END TEST nvmf_auth_host 00:26:41.274 ************************************ 00:26:41.274 20:26:30 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:41.274 20:26:30 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.274 20:26:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:41.274 20:26:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.274 20:26:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.274 ************************************ 00:26:41.274 START TEST nvmf_digest 00:26:41.274 ************************************ 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.274 * Looking for test storage... 00:26:41.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.274 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:41.275 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:41.535 Cannot find device "nvmf_tgt_br" 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:41.535 Cannot find device "nvmf_tgt_br2" 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:41.535 Cannot find device "nvmf_tgt_br" 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:41.535 Cannot find device "nvmf_tgt_br2" 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:41.535 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set 
nvmf_tgt_br master nvmf_br 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:41.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:26:41.795 00:26:41.795 --- 10.0.0.2 ping statistics --- 00:26:41.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.795 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:41.795 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:41.795 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:26:41.795 00:26:41.795 --- 10.0.0.3 ping statistics --- 00:26:41.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.795 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:41.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:26:41.795 00:26:41.795 --- 10.0.0.1 ping statistics --- 00:26:41.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.795 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:41.795 ************************************ 00:26:41.795 START TEST nvmf_digest_clean 00:26:41.795 ************************************ 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == 
\d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=111693 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 111693 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111693 ']' 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:41.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:41.795 20:26:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.796 [2024-07-14 20:26:30.794607] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:41.796 [2024-07-14 20:26:30.794708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.055 [2024-07-14 20:26:30.939608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.055 [2024-07-14 20:26:31.060072] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.055 [2024-07-14 20:26:31.060145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.055 [2024-07-14 20:26:31.060167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.055 [2024-07-14 20:26:31.060179] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.055 [2024-07-14 20:26:31.060188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
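Note on the nvmf_veth_init sequence traced above: it builds a veth topology bridged by nvmf_br, with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. A minimal standalone sketch of the same setup, using exactly the names and addresses seen in the trace (run as root; the "Cannot find device" / "No such file or directory" messages above are just the idempotent teardown of interfaces that do not exist yet on a fresh VM):

  # create the target network namespace and the veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-side ends into the namespace and assign the test addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side ends together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic in and let the bridge forward hairpinned traffic
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target namespace, as verified in the trace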
00:26:42.055 [2024-07-14 20:26:31.060222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:42.991 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:42.992 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.992 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.992 null0 00:26:42.992 [2024-07-14 20:26:31.983085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.992 [2024-07-14 20:26:32.007236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111743 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111743 /var/tmp/bperf.sock 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111743 ']' 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:42.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
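The common_target_config rpc_cmd above is not echoed argument-by-argument, but the notices that follow (the null0 bdev, "*** TCP Transport Init ***", the listener on 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, and NVMF_TRANSPORT_OPTS='-t tcp -o') imply a target configuration roughly equivalent to the sketch below. This is an inferred reconstruction, not the literal RPC stream; the null-bdev size and block-size arguments in particular are placeholders:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # same rpc.py used elsewhere in the trace
  $RPC framework_start_init                              # target was launched with --wait-for-rpc
  $RPC nvmf_create_transport -t tcp -o                   # matches NVMF_TRANSPORT_OPTS above
  $RPC bdev_null_create null0 100 4096                   # placeholder size (MiB) and block size
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420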
00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:42.992 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:42.992 [2024-07-14 20:26:32.069789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:42.992 [2024-07-14 20:26:32.069935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111743 ] 00:26:43.251 [2024-07-14 20:26:32.213973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.251 [2024-07-14 20:26:32.329168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.187 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:44.187 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:44.187 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:44.187 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:44.187 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:44.444 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.444 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.702 nvme0n1 00:26:44.702 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:44.702 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.960 Running I/O for 2 seconds... 
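Every data point in nvmf_digest_clean repeats the same initiator-side flow, and all of its commands appear verbatim in the trace; condensed into one sketch (paths as in the trace; only -w, -o and -q change between runs):

  BPERF_SOCK=/var/tmp/bperf.sock
  SPDK=/home/vagrant/spdk_repo/spdk
  # start bdevperf paused on its own RPC socket (geometry shown here: randread, 4 KiB, QD 128)
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish framework init, then attach the remote namespace with TCP data digest enabled
  $SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # drive the 2-second workload against the resulting nvme0n1 bdev
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

--ddgst is what makes this a digest test: each TCP data PDU carries a CRC32C data digest, and that digest work is what the accel crc32c counters checked after each run account for.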
00:26:46.863 00:26:46.863 Latency(us) 00:26:46.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.863 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:46.863 nvme0n1 : 2.00 23680.65 92.50 0.00 0.00 5399.70 2472.49 15847.80 00:26:46.863 =================================================================================================================== 00:26:46.863 Total : 23680.65 92.50 0.00 0.00 5399.70 2472.49 15847.80 00:26:46.863 0 00:26:46.863 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.863 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:46.863 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.863 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.863 | select(.opcode=="crc32c") 00:26:46.863 | "\(.module_name) \(.executed)"' 00:26:46.863 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111743 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111743 ']' 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111743 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111743 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:47.122 killing process with pid 111743 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111743' 00:26:47.122 Received shutdown signal, test time was about 2.000000 seconds 00:26:47.122 00:26:47.122 Latency(us) 00:26:47.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.122 =================================================================================================================== 00:26:47.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111743 00:26:47.122 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111743 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111834 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111834 /var/tmp/bperf.sock 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111834 ']' 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:47.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:47.382 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:47.382 [2024-07-14 20:26:36.436963] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:47.382 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:47.382 Zero copy mechanism will not be used. 
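The pass/fail decision for each run is the accel_get_stats check seen above: the jq filter pulls the crc32c operation counters out of bdevperf's accel layer, and the test asserts both that digests were actually computed and that the expected module computed them (software here, since DSA scanning is disabled). An equivalent standalone check, using the same RPC and filter as the trace (a sketch, not the test's own helper):

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))                 # some CRC32C work must have gone through the accel layer
  [[ $acc_module == software ]]          # DSA disabled, so the software module must have done it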
00:26:47.382 [2024-07-14 20:26:36.437061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111834 ] 00:26:47.642 [2024-07-14 20:26:36.577479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.642 [2024-07-14 20:26:36.649750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.579 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:48.579 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:48.579 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:48.579 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:48.579 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:48.838 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.838 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.096 nvme0n1 00:26:49.096 20:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:49.096 20:26:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:49.096 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.096 Zero copy mechanism will not be used. 00:26:49.096 Running I/O for 2 seconds... 
00:26:51.630 00:26:51.630 Latency(us) 00:26:51.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.630 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:51.630 nvme0n1 : 2.00 9712.90 1214.11 0.00 0.00 1644.05 506.41 11021.96 00:26:51.630 =================================================================================================================== 00:26:51.630 Total : 9712.90 1214.11 0.00 0.00 1644.05 506.41 11021.96 00:26:51.630 0 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:51.630 | select(.opcode=="crc32c") 00:26:51.630 | "\(.module_name) \(.executed)"' 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111834 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111834 ']' 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111834 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111834 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:51.630 killing process with pid 111834 00:26:51.630 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.630 00:26:51.630 Latency(us) 00:26:51.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.630 =================================================================================================================== 00:26:51.630 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111834' 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111834 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111834 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111931 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111931 /var/tmp/bperf.sock 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111931 ']' 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:51.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:51.630 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.889 [2024-07-14 20:26:40.724626] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:26:51.889 [2024-07-14 20:26:40.724722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111931 ] 00:26:51.889 [2024-07-14 20:26:40.862614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.889 [2024-07-14 20:26:40.952075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.826 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:52.826 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:52.826 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:52.826 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:52.826 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:53.086 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.086 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.344 nvme0n1 00:26:53.344 20:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:53.344 20:26:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.344 Running I/O for 2 seconds... 
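As a sanity check, the IOPS and MiB/s columns of the two randread tables above are self-consistent with the configured I/O sizes (an arithmetic note, not part of the test output):

  23680.65 IOPS x 4096 B   / 2^20  =  92.5 MiB/s   (4 KiB, QD 128 run)
   9712.90 IOPS x 131072 B / 2^20  = 1214.1 MiB/s  (128 KiB, QD 16 run)

The larger block size trades IOPS for bandwidth, and with --ddgst enabled every one of those payloads is also CRC32C'd on receive, which is the work the accel counters record.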
00:26:55.874 00:26:55.874 Latency(us) 00:26:55.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.875 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.875 nvme0n1 : 2.00 27363.62 106.89 0.00 0.00 4671.94 2368.23 8460.10 00:26:55.875 =================================================================================================================== 00:26:55.875 Total : 27363.62 106.89 0.00 0.00 4671.94 2368.23 8460.10 00:26:55.875 0 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:55.875 | select(.opcode=="crc32c") 00:26:55.875 | "\(.module_name) \(.executed)"' 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111931 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111931 ']' 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111931 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111931 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:55.875 killing process with pid 111931 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111931' 00:26:55.875 Received shutdown signal, test time was about 2.000000 seconds 00:26:55.875 00:26:55.875 Latency(us) 00:26:55.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.875 =================================================================================================================== 00:26:55.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111931 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111931 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112016 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112016 /var/tmp/bperf.sock 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 112016 ']' 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:55.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:55.875 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.146 [2024-07-14 20:26:45.005502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:56.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.146 Zero copy mechanism will not be used. 
00:26:56.146 [2024-07-14 20:26:45.005624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112016 ] 00:26:56.146 [2024-07-14 20:26:45.141606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.146 [2024-07-14 20:26:45.210462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.127 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:57.127 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:26:57.127 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:57.127 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:57.127 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:57.386 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.386 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.644 nvme0n1 00:26:57.644 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:57.644 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:57.644 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:57.644 Zero copy mechanism will not be used. 00:26:57.644 Running I/O for 2 seconds... 
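For orientation, nvmf_digest_clean sweeps the same bperf flow over four geometries (host/digest.sh@128-131 in the trace above), all with DSA scanning disabled:

  run_bperf randread  4096   128 false   # 4 KiB reads,   QD 128
  run_bperf randread  131072 16  false   # 128 KiB reads,  QD 16
  run_bperf randwrite 4096   128 false   # 4 KiB writes,  QD 128
  run_bperf randwrite 131072 16  false   # 128 KiB writes, QD 16

Each invocation spins up its own bdevperf instance (pids 111743, 111834, 111931 and 112016 above), runs the workload for two seconds, verifies the crc32c accel counters, and tears the bperf process down again.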
00:27:00.172 00:27:00.172 Latency(us) 00:27:00.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.173 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:00.173 nvme0n1 : 2.00 8394.77 1049.35 0.00 0.00 1901.54 1482.01 8877.15 00:27:00.173 =================================================================================================================== 00:27:00.173 Total : 8394.77 1049.35 0.00 0.00 1901.54 1482.01 8877.15 00:27:00.173 0 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:00.173 | select(.opcode=="crc32c") 00:27:00.173 | "\(.module_name) \(.executed)"' 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112016 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 112016 ']' 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 112016 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112016 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:00.173 killing process with pid 112016 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112016' 00:27:00.173 Received shutdown signal, test time was about 2.000000 seconds 00:27:00.173 00:27:00.173 Latency(us) 00:27:00.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.173 =================================================================================================================== 00:27:00.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 112016 00:27:00.173 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 112016 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 111693 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 111693 ']' 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111693 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111693 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:00.173 killing process with pid 111693 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111693' 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111693 00:27:00.173 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111693 00:27:00.431 00:27:00.431 real 0m18.758s 00:27:00.431 user 0m35.224s 00:27:00.431 sys 0m4.853s 00:27:00.432 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:00.432 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.432 ************************************ 00:27:00.432 END TEST nvmf_digest_clean 00:27:00.432 ************************************ 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:00.690 ************************************ 00:27:00.690 START TEST nvmf_digest_error 00:27:00.690 ************************************ 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=112129 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 112129 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112129 ']' 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:00.690 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:00.690 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.690 [2024-07-14 20:26:49.605153] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:00.690 [2024-07-14 20:26:49.605260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.690 [2024-07-14 20:26:49.739363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.947 [2024-07-14 20:26:49.810538] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.948 [2024-07-14 20:26:49.810607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.948 [2024-07-14 20:26:49.810616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.948 [2024-07-14 20:26:49.810623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.948 [2024-07-14 20:26:49.810629] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.948 [2024-07-14 20:26:49.810658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.513 [2024-07-14 20:26:50.575267] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.513 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.771 null0 00:27:01.771 [2024-07-14 
20:26:50.708709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.771 [2024-07-14 20:26:50.732834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112174 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112174 /var/tmp/bperf.sock 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112174 ']' 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:01.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:01.771 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.771 [2024-07-14 20:26:50.800741] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
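nvmf_digest_error differs from the clean variant mainly on the target side: before the subsystem is configured, crc32c is reassigned from the software accel module to the error-injection module (the accel_assign_opc notice above), so digest corruption can be injected later. Expanded from the rpc_cmd wrapper in the trace (rpc_cmd resolves to scripts/rpc.py against the target's default RPC socket), the relevant target-side piece is roughly:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
      # host/digest.sh@104: route crc32c through the error-injection accel module
  # ...followed by the same common_target_config as before (null0 bdev, TCP listener on 10.0.0.2:4420)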
00:27:01.771 [2024-07-14 20:26:50.800881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112174 ] 00:27:02.029 [2024-07-14 20:26:50.941719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.029 [2024-07-14 20:26:51.020415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.964 20:26:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.222 nvme0n1 00:27:03.480 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:03.480 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.480 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.480 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.480 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:03.480 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:03.480 Running I/O for 2 seconds... 
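The error-path run adds two things on top of the clean flow, both visible above: bdevperf's NVMe bdev is told to retry indefinitely and keep NVMe error statistics, and 256 corrupt crc32c results are injected on the target before perform_tests starts. Condensed from the trace (same sockets and rpc.py as before):

  # initiator side: unlimited retries so injected digest failures do not fail the bdev
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt the next 256 crc32c results produced by the accel error module
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  bdevperf.py -s /var/tmp/bperf.sock perform_tests

The "data digest error on tqpair" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" lines that follow are the expected outcome: the corrupted digests no longer match the data, the initiator reports them as transient transport errors, and the unlimited retry count keeps the workload running. A quick way to count such events when scanning a log like this one (just a grep; the file name is hypothetical):

  grep -c 'data digest error on tqpair' build.log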
00:27:03.480 [2024-07-14 20:26:52.464505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.464556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.480 [2024-07-14 20:26:52.464571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.480 [2024-07-14 20:26:52.473887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.473915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.480 [2024-07-14 20:26:52.473927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.480 [2024-07-14 20:26:52.485513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.485543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.480 [2024-07-14 20:26:52.485555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.480 [2024-07-14 20:26:52.497687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.497717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.480 [2024-07-14 20:26:52.497729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.480 [2024-07-14 20:26:52.508420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.508450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.480 [2024-07-14 20:26:52.508461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.480 [2024-07-14 20:26:52.518081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.518111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.480 [2024-07-14 20:26:52.518123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.480 [2024-07-14 20:26:52.530898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.530950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.480 [2024-07-14 20:26:52.530966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.480 [2024-07-14 20:26:52.541816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.480 [2024-07-14 20:26:52.541846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.481 [2024-07-14 20:26:52.541869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.481 [2024-07-14 20:26:52.552304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.481 [2024-07-14 20:26:52.552333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.481 [2024-07-14 20:26:52.552344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.481 [2024-07-14 20:26:52.563615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.481 [2024-07-14 20:26:52.563647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.481 [2024-07-14 20:26:52.563660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.738 [2024-07-14 20:26:52.574835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.738 [2024-07-14 20:26:52.574876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.738 [2024-07-14 20:26:52.574889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.586212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.586241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.586252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.598582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.598613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.598625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.609916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.609944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.609956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.621653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.621683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.621695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.633333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.633363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.633374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.645000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.645029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.645040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.655143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.655173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.655184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.666305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.666335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.666346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.678056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.678084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.678096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.687273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.687302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.739 [2024-07-14 20:26:52.687314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.700839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.700878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.700891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.711406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.711435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.711447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.722812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.722841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.722863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.735466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.735495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.735509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.745619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.745648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.745659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.758055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.758084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.758096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.769659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.769688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:2204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.769699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.781628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.781658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.781669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.794357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.794403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.794415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.804484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.804513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.804525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.739 [2024-07-14 20:26:52.814623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.739 [2024-07-14 20:26:52.814654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.739 [2024-07-14 20:26:52.814665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.827612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.827641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.827653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.840218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.840247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.840259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.852019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.852048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.852065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.861447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.861475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.861486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.872837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.872874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.872886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.883701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.883729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.883740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.893928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.893955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.893967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.905493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.905521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.905532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.916332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.916361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.916373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.926312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 
00:27:03.997 [2024-07-14 20:26:52.926340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.926356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.936243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.936272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.936283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.946346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.946374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.946385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.958442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.958470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.997 [2024-07-14 20:26:52.958481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.997 [2024-07-14 20:26:52.968816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.997 [2024-07-14 20:26:52.968845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:52.968866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:52.978691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:52.978720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:52.978731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:52.990282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:52.990311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:52.990323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:52.999436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:52.999464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:52.999475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:53.009927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:53.009955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:53.009967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:53.021005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:53.021033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:53.021044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:53.032168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:53.032197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:53.032208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:53.042839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:53.042876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:53.042887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:53.052217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:53.052245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:53.052256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:53.062212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:53.062240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:53.062252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.998 [2024-07-14 20:26:53.073676] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:03.998 [2024-07-14 20:26:53.073705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.998 [2024-07-14 20:26:53.073716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.084900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.084953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.084965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.096456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.096485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.096496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.108368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.108397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.108408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.117136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.117164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.117175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.129440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.129470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.129482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.138749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.138778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.138789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.149419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.149447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.149458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.160955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.160983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.160994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.171660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.171689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.171700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.183578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.183607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.183619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.192588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.192617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.192628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.204327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.204357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.204368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.215744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.215773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.215783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.227633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.227661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.227672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.237064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.237092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.237103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.256 [2024-07-14 20:26:53.247624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.256 [2024-07-14 20:26:53.247653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.256 [2024-07-14 20:26:53.247663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.259555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.259583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.259594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.270305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.270333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.270345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.279977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.280003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.280014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.291862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.291899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.291910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.301345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.301375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.301387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.312421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.312449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.312460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.321751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.321780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.321796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.257 [2024-07-14 20:26:53.333715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.257 [2024-07-14 20:26:53.333745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.257 [2024-07-14 20:26:53.333764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.346997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.347027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.347041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.357344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.357373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.357384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.367020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.367050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:04.515 [2024-07-14 20:26:53.367062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.378953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.378983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.378995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.390650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.390679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.390690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.401818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.401846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.401868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.412495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.412523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.412534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.421743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.421772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.421783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.434310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.434338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.434349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.442401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.442430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:3611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.442441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.454874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.454902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.454920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.467209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.467254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.467281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.477702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.477731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.477742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.487412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.487440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.487451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.498154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.498183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.498194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.508569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.508599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.508617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.520344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.520372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.520383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.531046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.531074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.531086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.541327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.541355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.541366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.550894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.550944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.550962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.561015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.515 [2024-07-14 20:26:53.561043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-07-14 20:26:53.561055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.515 [2024-07-14 20:26:53.571602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.516 [2024-07-14 20:26:53.571630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.516 [2024-07-14 20:26:53.571641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.516 [2024-07-14 20:26:53.582722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.516 [2024-07-14 20:26:53.582749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.516 [2024-07-14 20:26:53.582760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.516 [2024-07-14 20:26:53.592700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 
00:27:04.516 [2024-07-14 20:26:53.592729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.516 [2024-07-14 20:26:53.592740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.602767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.602812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.602824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.614893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.614945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.614972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.626472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.626499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.626511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.637079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.637107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.637118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.647555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.647583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.647594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.657780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.657808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.657819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.669665] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.669693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.669704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.680691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.680720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.680731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.688604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.688631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.688642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.700611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.700638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.700649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.711868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.711904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.711915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.722714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.722744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.722756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.732717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.732746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.732757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:04.774 [2024-07-14 20:26:53.744013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.774 [2024-07-14 20:26:53.744041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.774 [2024-07-14 20:26:53.744051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.754519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.754547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.754558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.765986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.766014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.766025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.777108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.777137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.777149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.787312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.787358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.787369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.798169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.798199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.798210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.809139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.809168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.809179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.819856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.819908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.819920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.830180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.830209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.830235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.841088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.841117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.841128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.775 [2024-07-14 20:26:53.852865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:04.775 [2024-07-14 20:26:53.852892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-07-14 20:26:53.852904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.865839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.865874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.865885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.876808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.876836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.876847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.886939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.886983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.886994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.896764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.896792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.896803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.907762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.907790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.907801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.919064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.919094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.919106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.929389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.929419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.929431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.941176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.941205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.941216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.951623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.951651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.951662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.961807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.961835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.034 [2024-07-14 20:26:53.961845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.973456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.973485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.973495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.985019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.985047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.985058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:53.996135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:53.996163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:53.996175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.005209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.005237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.005248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.016527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.016555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.016567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.026691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.026721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.026732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.035919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.035947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:19876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.035958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.046708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.046736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.046747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.057299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.057328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.057339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.067292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.067337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.067348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.077755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.077784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.077794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.088185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.088212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.088223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.099317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.099362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.099372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.034 [2024-07-14 20:26:54.109453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.034 [2024-07-14 20:26:54.109481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.034 [2024-07-14 20:26:54.109492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.121653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.121683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.121695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.133111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.133139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.133151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.144437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.144465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.144476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.153289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.153318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.153329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.164715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.164743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.164754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.175696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.175724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.175735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.185415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 
[2024-07-14 20:26:54.185443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.185454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.197175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.197203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.197214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.208437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.208465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.208476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.219990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.220018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.220028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.230119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.230147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.230158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.241448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.241478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.241489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.250969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.250997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.251009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.261786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.261814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.261824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.272410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.272439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.272450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.282022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.282049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.282060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.293967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.293994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.294005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.304545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.304575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.304586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.314167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.314195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.314205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.325687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.325715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.325725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.336547] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.336575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.336586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.347936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.347966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.347982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.359025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.359054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.359066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.294 [2024-07-14 20:26:54.367238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.294 [2024-07-14 20:26:54.367267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.294 [2024-07-14 20:26:54.367278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.553 [2024-07-14 20:26:54.378791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.553 [2024-07-14 20:26:54.378821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.553 [2024-07-14 20:26:54.378833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.553 [2024-07-14 20:26:54.392908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.553 [2024-07-14 20:26:54.392937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.553 [2024-07-14 20:26:54.392948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.553 [2024-07-14 20:26:54.401553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0) 00:27:05.553 [2024-07-14 20:26:54.401583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.553 [2024-07-14 20:26:54.401594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:27:05.553 [2024-07-14 20:26:54.412678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0)
00:27:05.553 [2024-07-14 20:26:54.412708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.553 [2024-07-14 20:26:54.412720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.553 [2024-07-14 20:26:54.424757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0)
00:27:05.553 [2024-07-14 20:26:54.424786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.553 [2024-07-14 20:26:54.424797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.553 [2024-07-14 20:26:54.434333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0)
00:27:05.553 [2024-07-14 20:26:54.434363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.553 [2024-07-14 20:26:54.434373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.553 [2024-07-14 20:26:54.444330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23fa6d0)
00:27:05.553 [2024-07-14 20:26:54.444360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.553 [2024-07-14 20:26:54.444372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:05.553
00:27:05.553 Latency(us)
00:27:05.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:05.553 nvme0n1 : 2.00 23296.11 91.00 0.00 0.00 5488.17 3053.38 18469.24
00:27:05.553 ===================================================================================================================
00:27:05.553 Total : 23296.11 91.00 0.00 0.00 5488.17 3053.38 18469.24
00:27:05.553 0
00:27:05.553 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:05.553 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:05.553 | .driver_specific
00:27:05.553 | .nvme_error
00:27:05.554 | .status_code
00:27:05.554 | .command_transient_transport_error'
00:27:05.554 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:05.554 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 ))
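For readers following the trace: the get_transient_errcount helper above is just the bdev_get_iostat RPC filtered through jq. A minimal standalone sketch of that check, using only the socket path, bdev name and jq filter shown in the trace (variable names are illustrative, not part of the test script):

  # Count completions recorded as COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1,
  # mirroring the jq filter and the (( 183 > 0 )) assertion in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || echo "expected transient transport errors, got ${errcount}" >&2
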
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112174
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112174 ']'
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112174
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112174
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
killing process with pid 112174
20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112174'
20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112174
Received shutdown signal, test time was about 2.000000 seconds
00:27:05.813
00:27:05.813 Latency(us)
00:27:05.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.813 ===================================================================================================================
00:27:05.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:05.813 20:26:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112174
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112265
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112265 /var/tmp/bperf.sock
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112265 ']'
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:06.072 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:06.072 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:06.072 Zero copy mechanism will not be used.
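The run_bperf_err randread 131072 16 call traced above reduces to launching bdevperf against the bperf RPC socket with those parameters and waiting for the socket to come up. A rough sketch under that reading; the polling loop is a simplified, hypothetical stand-in for the waitforlisten helper shown in the trace:

  # Launch bdevperf for the next pass with the traced arguments, then wait for its RPC socket.
  rw=randread bs=131072 qd=16
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
  bperfpid=$!
  # Poll the RPC socket until bdevperf accepts commands (simplified stand-in for waitforlisten).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
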
00:27:06.072 [2024-07-14 20:26:55.048526] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:27:06.072 [2024-07-14 20:26:55.048603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112265 ]
00:27:06.331 [2024-07-14 20:26:55.180119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:06.331 [2024-07-14 20:26:55.275657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:07.267 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:07.267 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:27:07.267 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.267 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.267 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:07.267 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:07.267 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.267 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:07.267 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.267 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.524 nvme0n1
00:27:07.524 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:07.524 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:07.524 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:07.524 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:07.524 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:07.524 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:07.785 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:07.785 Zero copy mechanism will not be used.
00:27:07.785 Running I/O for 2 seconds...
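Condensing the RPCs traced above, the setup for this pass appears to be: enable per-bdev NVMe error counters, disable any leftover accel error injection, attach the target with the TCP data digest enabled, re-arm crc32c corruption, and start the bdevperf job. A sketch with the values copied from the trace (the accel_error_inject_error calls are issued via rpc_cmd with no -s flag, so they presumably go to the default application socket rather than the bperf socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$rpc" accel_error_inject_error -o crc32c -t disable           # injection off while connecting
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0              # --ddgst enables the TCP data digest
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32      # re-arm corruption, arguments as traced
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
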
00:27:07.785 [2024-07-14 20:26:56.665153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.665220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.665234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.668876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.668908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.668920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.673320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.673352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.673364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.677527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.677558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.677570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.681625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.681656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.681668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.684537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.684567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.684579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.688159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.688190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.688202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.691763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.691794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.691805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.694450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.694480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.694491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.698648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.698679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.698691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.702387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.702419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.702430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.705883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.705913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.705924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.709508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.709540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.709552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.712657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.712687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.712698] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.716223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.716253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.716270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.719903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.719933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.719945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.722663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.722694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.722705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.726869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.726899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.726910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.730981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.731012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.731028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.733752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.733783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.733794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.736956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.736986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.736997] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.740814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.740845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.740868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.744480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.785 [2024-07-14 20:26:56.744512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.785 [2024-07-14 20:26:56.744524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.785 [2024-07-14 20:26:56.747644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.747676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.747688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.751830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.751878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.751890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.754696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.754726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.754738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.758502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.758533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.758545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.762371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.762401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:07.786 [2024-07-14 20:26:56.762413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.766102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.766133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.766145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.769170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.769201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.769213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.773072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.773102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.773113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.776732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.776763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.776774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.779698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.779729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.779740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.782909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.782965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.782979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.786224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.786254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.786265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.789216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.789246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.789257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.792407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.792437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.792449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.796011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.796042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.796053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.798512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.798541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.798553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.802722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.802754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.802765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.805494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.805524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.805535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.809054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.809083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.809094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.813072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.813102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.813114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.815687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.815717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.815729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.818987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.819019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.819030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.822892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.822945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.822963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.826382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.826412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.826423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.829788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.829818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.829829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.832975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.833006] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.833017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.836800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.836831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.836843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.840727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.840758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.840769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.843592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.786 [2024-07-14 20:26:56.843624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.786 [2024-07-14 20:26:56.843636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.786 [2024-07-14 20:26:56.847097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.787 [2024-07-14 20:26:56.847128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.787 [2024-07-14 20:26:56.847139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.787 [2024-07-14 20:26:56.850643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.787 [2024-07-14 20:26:56.850673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.787 [2024-07-14 20:26:56.850685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.787 [2024-07-14 20:26:56.854218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.787 [2024-07-14 20:26:56.854248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.787 [2024-07-14 20:26:56.854259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.787 [2024-07-14 20:26:56.857078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 
00:27:07.787 [2024-07-14 20:26:56.857108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.787 [2024-07-14 20:26:56.857120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.787 [2024-07-14 20:26:56.860341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.787 [2024-07-14 20:26:56.860372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.787 [2024-07-14 20:26:56.860384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.787 [2024-07-14 20:26:56.864111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.787 [2024-07-14 20:26:56.864143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.787 [2024-07-14 20:26:56.864155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.787 [2024-07-14 20:26:56.867283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:07.787 [2024-07-14 20:26:56.867327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.787 [2024-07-14 20:26:56.867339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.871153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.871187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.871199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.874651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.874694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.874706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.878590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.878623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.878634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.882080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.882113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.882125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.885792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.885823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.885835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.889429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.889460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.889471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.892670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.892701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.892712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.896201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.896247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.896259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.900053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.900084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.900095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.902833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.902901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.902958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.906842] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.906898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.906919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.910333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.910364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.910376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.914247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.914278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.914290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.917465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.917497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.917508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.920681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.920712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.920723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.924176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.924206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.924218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.927296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.927327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.927339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:08.048 [2024-07-14 20:26:56.930660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.930690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.930701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.933968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.933998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.934009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.937732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.937762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.937774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.941542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.941572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.941584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.944982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.945013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.945024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.948628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.948658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.948669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.952110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.952140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.952151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.955248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.955280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.955292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.958969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.959000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.959012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.961838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.961877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.961889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.048 [2024-07-14 20:26:56.965906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.048 [2024-07-14 20:26:56.965935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.048 [2024-07-14 20:26:56.965949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.969595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.969625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.969638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.971937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.971966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.971982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.976090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.976121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.976133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.979468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.979502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.979514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.982441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.982472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.982484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.986288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.986319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.986330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.990573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.990604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.990617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.993253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.993282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.993293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:56.996663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:56.996694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:56.996706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.000474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.000505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.000516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.003255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.003286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.003297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.006526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.006557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.006568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.009911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.009941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.009952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.013210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.013240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.013252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.016193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.016223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.016234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.020066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.020097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.020108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.023628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.023658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:08.049 [2024-07-14 20:26:57.023669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.026276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.026306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.026318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.030244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.030276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.030287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.034529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.034560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.034571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.038452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.038484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.038495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.040768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.040797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.040808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.044869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.044899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.044910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.048713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.048744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.048755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.051651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.051681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.051693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.055105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.055136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.055149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.058683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.058714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.058726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.062416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.062447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.049 [2024-07-14 20:26:57.062458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.049 [2024-07-14 20:26:57.065660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.049 [2024-07-14 20:26:57.065692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.065703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.068834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.068875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.068886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.072530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.072561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.072573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.076096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.076126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.076138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.079094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.079125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.079137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.082654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.082685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.082697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.085792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.085823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.085834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.088484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.088515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.088527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.092547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.092579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.092592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.096028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 
[2024-07-14 20:26:57.096060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.096072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.099341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.099388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.099400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.102982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.103013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.103026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.106851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.106943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.106974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.110287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.110317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.110331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.114373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.114403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.114415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.117552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.117583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.117594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.121072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.121103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.121115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.124131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.124160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.124171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.050 [2024-07-14 20:26:57.127620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.050 [2024-07-14 20:26:57.127650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.050 [2024-07-14 20:26:57.127663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.310 [2024-07-14 20:26:57.131406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.310 [2024-07-14 20:26:57.131438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-14 20:26:57.131450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.310 [2024-07-14 20:26:57.135083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.310 [2024-07-14 20:26:57.135115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-14 20:26:57.135135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.310 [2024-07-14 20:26:57.139033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.310 [2024-07-14 20:26:57.139065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-14 20:26:57.139078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.310 [2024-07-14 20:26:57.141970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.310 [2024-07-14 20:26:57.141999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.310 [2024-07-14 20:26:57.142016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.311 [2024-07-14 20:26:57.145681] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.311 [2024-07-14 20:26:57.145711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 [2024-07-14 20:26:57.145723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.311 [2024-07-14 20:26:57.149019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.311 [2024-07-14 20:26:57.149049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 [2024-07-14 20:26:57.149061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.311 [2024-07-14 20:26:57.152700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.311 [2024-07-14 20:26:57.152730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 [2024-07-14 20:26:57.152742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.311 [2024-07-14 20:26:57.155715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.311 [2024-07-14 20:26:57.155745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 [2024-07-14 20:26:57.155759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.311 [2024-07-14 20:26:57.159517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.311 [2024-07-14 20:26:57.159548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 [2024-07-14 20:26:57.159560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.311 [2024-07-14 20:26:57.163477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.311 [2024-07-14 20:26:57.163507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 [2024-07-14 20:26:57.163519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.311 [2024-07-14 20:26:57.167738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.311 [2024-07-14 20:26:57.167769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.311 [2024-07-14 20:26:57.167782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0
00:27:08.311 [2024-07-14 20:26:57.171320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330)
00:27:08.311 [2024-07-14 20:26:57.171375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.311 [2024-07-14 20:26:57.171386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.311 [2024-07-14 20:26:57.173826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330)
00:27:08.311 [2024-07-14 20:26:57.173866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.311 [2024-07-14 20:26:57.173879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern repeats for many more READ commands on qid:1 (varying cid and lba, len:32; elapsed 00:27:08.311-00:27:08.838, wall clock 2024-07-14 20:26:57.178-20:26:57.672): a data digest error on tqpair=(0x1fad330) reported by nvme_tcp_accel_seq_recv_compute_crc32_done, the offending command printed by nvme_io_qpair_print_command, and its completion printed by spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 ...]
00:27:08.838 [2024-07-14 20:26:57.672323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.676730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.676761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.676772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.680772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.680803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.680814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.683537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.683567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.683578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.686896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.686965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.686977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.691156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.691188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.691200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.694144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.694173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.694185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.697923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.697952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.697963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.701762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.701792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.701804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.704502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.704531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.704542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.708032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.708061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.708072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.711703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.711733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.711744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.714878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.714950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.714974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.718077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.718109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.718120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.721532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.721562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.721574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.724978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.725009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.725020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.728806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.728835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.728847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.732793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.732823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.732835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.736095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.736125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.736137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.739657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.739687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.739698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.743463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.743494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.743507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.838 [2024-07-14 20:26:57.747033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.838 [2024-07-14 20:26:57.747063] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.838 [2024-07-14 20:26:57.747075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.750832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.750872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.750885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.754689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.754720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.754732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.758410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.758441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.758453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.762190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.762220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.762231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.765245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.765275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.765287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.769121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.769151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.769163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.773397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 
00:27:08.839 [2024-07-14 20:26:57.773427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.773438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.777079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.777109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.777120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.779794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.779823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.779835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.783433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.783464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.783476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.787791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.787821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.787832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.790694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.790723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.790735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.794516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.794546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.794557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.798608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.798638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.798649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.801977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.802006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.802018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.805194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.805223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.805235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.808904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.808932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.808943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.812362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.812392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.812404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.815403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.815433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.815444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.818116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.818145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.818157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.821569] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.821599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.821610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.825253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.825283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.825295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.828636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.828666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.828677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.832458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.832488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.832500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.835532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.835561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.835573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.839157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.839188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.839200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.843203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.843234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.843246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:08.839 [2024-07-14 20:26:57.846093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.846122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.846133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.839 [2024-07-14 20:26:57.849501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.839 [2024-07-14 20:26:57.849531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.839 [2024-07-14 20:26:57.849542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.853209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.853239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.853250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.855991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.856019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.856031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.859523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.859553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.859565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.863638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.863669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.863680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.866460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.866490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.866502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.870144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.870173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.870185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.873841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.873879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.873890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.877722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.877753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.877765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.880647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.880677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.880688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.884295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.884327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.884342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.888164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.888194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.888205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.892100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.892130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.892141] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.896321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.896356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.896368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.898681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.898709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.898724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.903095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.903129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.903141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.905944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.905974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.905987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.909641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.909672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.909684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.914089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.914119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.914132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.840 [2024-07-14 20:26:57.916998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:08.840 [2024-07-14 20:26:57.917027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.840 [2024-07-14 20:26:57.917039] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.921280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.921327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.921339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.925445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.925475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.925486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.928657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.928686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.928698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.932562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.932591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.932603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.936055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.936085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.936096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.939102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.939135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.939148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.941940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.941968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.102 [2024-07-14 20:26:57.941979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.946118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.946148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.946160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.948932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.948960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.948971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.952432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.952462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.952474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.956192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.956223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.956235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.959605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.959635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.959647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.963310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.963356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.963384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.966779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.966809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.966820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.970666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.970697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.970709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.973343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.973372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.973383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.976895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.976923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.976934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.981081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.981111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.981122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.984080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.984110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.984121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.987757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.987787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.987798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.991018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.991048] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.991060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.994125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.994154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.994165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:57.997635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:57.997666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:57.997677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:58.000921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:58.000950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:58.000962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:58.004441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:58.004471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:58.004483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.102 [2024-07-14 20:26:58.007973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.102 [2024-07-14 20:26:58.008003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.102 [2024-07-14 20:26:58.008014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.011218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.011249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.011261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.014955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.014985] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.014997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.018717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.018747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.018759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.021571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.021601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.021613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.024889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.024918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.024930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.028114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.028144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.028155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.031771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.031801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.031812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.034972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.034996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.035008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.038160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 
00:27:09.103 [2024-07-14 20:26:58.038189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.038201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.041805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.041835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.041847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.045458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.045489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.045500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.048405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.048435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.048446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.051847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.051888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.051900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.055948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.055978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.055989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.059930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.059960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.059971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.063611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.063641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.063652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.065881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.065908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.065919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.070173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.070203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.070215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.074302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.074333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.074345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.076697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.076726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.076737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.080711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.080742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.080753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.084003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.084034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.084046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.086947] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.086976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.086988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.090481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.090512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.090524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.094348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.094379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.094390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.098264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.098295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.098307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.101882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.101912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.101923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.105519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.105550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.105562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.103 [2024-07-14 20:26:58.109185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.103 [2024-07-14 20:26:58.109215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.103 [2024-07-14 20:26:58.109227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:09.104 [2024-07-14 20:26:58.113131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.113161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.113177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.116423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.116453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.116465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.120368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.120397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.120409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.123918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.123946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.123958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.126538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.126567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.126578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.130211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.130242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.130254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.134220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.134250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.134261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.138073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.138102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.138114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.140947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.140976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.140987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.144447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.144477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.144488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.147552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.147582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.147594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.150520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.150551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.150562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.154178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.154208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.154220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.157796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.157826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.157838] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.161075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.161104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.161118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.164139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.164173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.164195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.168128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.168158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.168173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.171238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.171286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.171298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.175105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.175136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.175148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.178691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.178722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 20:26:58.178737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.104 [2024-07-14 20:26:58.183388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.104 [2024-07-14 20:26:58.183434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.104 [2024-07-14 
20:26:58.183446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.186080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.186127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.186139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.190389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.190419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.190431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.194566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.194596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.194608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.197546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.197577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.197588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.201102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.201132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.201143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.204542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.204573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.204584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.208018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.208049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.208060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.212057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.212088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.212099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.214837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.214875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.214887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.218388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.218417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.218428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.221639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.221669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.221680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.225587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.225618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.225630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.228455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.228485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.228496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.232277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.232308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.232319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.235483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.235512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.235524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.239035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.239065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.239077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.242615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.242645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.365 [2024-07-14 20:26:58.242656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.365 [2024-07-14 20:26:58.245616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.365 [2024-07-14 20:26:58.245645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.245657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.248963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.248991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.249002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.252237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.252266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.252278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.256149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.256179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.256191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.260207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.260237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.260248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.262964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.262994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.263006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.266623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.266653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.266664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.271116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.271147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.271159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.274567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.274596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.274607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.277729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.277759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.277770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.281542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 
00:27:09.366 [2024-07-14 20:26:58.281572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.281584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.284993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.285022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.285034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.287937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.287967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.287978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.291013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.291044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.291056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.294654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.294683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.294695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.297725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.297755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.297767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.301366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.301395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.301407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.305812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.305843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.305865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.308821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.308851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.308878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.312198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.312227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.312239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.316197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.316227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.316239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.320182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.320212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.320223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.366 [2024-07-14 20:26:58.323902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.366 [2024-07-14 20:26:58.323930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.366 [2024-07-14 20:26:58.323941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.327169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.327199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.327211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.330079] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.330108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.330120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.334046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.334076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.334087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.337537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.337568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.337579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.341011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.341041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.341052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.344091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.344121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.344133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.347777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.347806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.347818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.351609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.351639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.351650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:09.367 [2024-07-14 20:26:58.355119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.355151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.355162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.358289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.358319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.358329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.361419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.361450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.361466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.365504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.365534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.365545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.369396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.369427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.369438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.371694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.371723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.371734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.375928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.375957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.375968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.379285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.379316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.379343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.382224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.382253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.382264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.385383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.385415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.385432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.389281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.389311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.389328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.392565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.392594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.392606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.396317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.396348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.367 [2024-07-14 20:26:58.396360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.367 [2024-07-14 20:26:58.399483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.367 [2024-07-14 20:26:58.399513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.399525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.402831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.402871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.402884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.406693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.406724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.406743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.410036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.410067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.410079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.413959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.413999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.414014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.417850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.417925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.417937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.420868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.420896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.420908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.424686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.424716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.368 [2024-07-14 20:26:58.424727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.428623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.428653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.428666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.431382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.431411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.431428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.435942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.435981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.435993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.440147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.440178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.440190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.442662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.442690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.442703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.368 [2024-07-14 20:26:58.446301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.368 [2024-07-14 20:26:58.446334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.368 [2024-07-14 20:26:58.446347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.450656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.450686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.450704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.453951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.453981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.453993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.457714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.457747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.457759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.461911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.461935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.461946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.465166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.465196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.465207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.469147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.469177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.469189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.472550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.472579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.472591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.476207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.476236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.476247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.479708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.479738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.479750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.482840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.482879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.482891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.486781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.486812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.486823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.489701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.489733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.489744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.493449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.493479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.493490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.496634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.496664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.496676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.499829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 
00:27:09.631 [2024-07-14 20:26:58.499872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.499884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.503582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.503612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.503624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.507199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.507231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.507243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.510059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.510089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.510100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.514153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.514184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-14 20:26:58.514196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.631 [2024-07-14 20:26:58.517678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.631 [2024-07-14 20:26:58.517708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.517720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.520968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.520998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.521009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.524263] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.524292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.524304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.527800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.527831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.527842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.530739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.530769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.530780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.534780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.534810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.534822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.538971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.539001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.539013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.541443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.541472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.541483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.545254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.545285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.545296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:09.632 [2024-07-14 20:26:58.549135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.549165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.549176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.552743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.552773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.552790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.555329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.555374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.555385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.558563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.558592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.558605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.561848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.561887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.561899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.565555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.565585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.565597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.569247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.569277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.569289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.572537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.572567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.572578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.576532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.576562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.576578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.579962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.579992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.580004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.583121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.583152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.583164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.586897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.586951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.586963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.589997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.590028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.590039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.593305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.593335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.593347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.596808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.596838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.596849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.600536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.600565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.600577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.604118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.604147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.604158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.607501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.607531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.607542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.610624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.610654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.610665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.614225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.632 [2024-07-14 20:26:58.614256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-14 20:26:58.614283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.632 [2024-07-14 20:26:58.617850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.617890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.633 [2024-07-14 20:26:58.617902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.620821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.620851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.620873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.624627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.624658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.624669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.628336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.628366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.628377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.630960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.630989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.631000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.635090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.635122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.635134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.638680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.638710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.638721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.641377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.641406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.641417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.644932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.644961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.644973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.649277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.649307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.649318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.652316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.652344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.652356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.655872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.655912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.655925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.633 [2024-07-14 20:26:58.659168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fad330) 00:27:09.633 [2024-07-14 20:26:58.659200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-14 20:26:58.659212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.633 00:27:09.633 Latency(us) 00:27:09.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.633 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:09.633 nvme0n1 : 2.00 8850.38 1106.30 0.00 0.00 1804.26 547.37 4676.89 00:27:09.633 =================================================================================================================== 00:27:09.633 Total : 8850.38 1106.30 0.00 0.00 1804.26 547.37 4676.89 00:27:09.633 0 00:27:09.633 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:09.633 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
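The trace above is the tail of host/digest.sh's randread error pass: bdevperf prints its 2-second latency summary, and the script then asks the bdevperf application, over its private RPC socket, how many completions carried the COMMAND TRANSIENT TRANSPORT ERROR status and requires that count to be non-zero. A minimal stand-alone sketch of that check, using the socket path, RPC, and jq filter exactly as they appear in the surrounding trace (the errcount variable name is ours; the counters are only populated because bdev_nvme_set_options is invoked with --nvme-error-stat, as the next run's setup below shows):

    #!/usr/bin/env bash
    # Query per-bdev I/O statistics from the bdevperf instance listening on
    # /var/tmp/bperf.sock and extract the transient-transport-error counter
    # that the NVMe bdev keeps for nvme0n1.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The digest test only requires that at least one such error was seen and
    # retried; the run above counted 571 of them.
    (( errcount > 0 ))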
00:27:09.633 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:09.633 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:09.633 | .driver_specific 00:27:09.633 | .nvme_error 00:27:09.633 | .status_code 00:27:09.633 | .command_transient_transport_error' 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 571 > 0 )) 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112265 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112265 ']' 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112265 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112265 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:09.892 killing process with pid 112265 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112265' 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112265 00:27:09.892 Received shutdown signal, test time was about 2.000000 seconds 00:27:09.892 00:27:09.892 Latency(us) 00:27:09.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.892 =================================================================================================================== 00:27:09.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.892 20:26:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112265 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112350 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112350 /var/tmp/bperf.sock 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112350 ']' 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:27:10.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:10.460 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.460 [2024-07-14 20:26:59.303076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:10.460 [2024-07-14 20:26:59.303204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112350 ] 00:27:10.460 [2024-07-14 20:26:59.444718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.719 [2024-07-14 20:26:59.566641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.287 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:11.287 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:11.287 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:11.287 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:11.546 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:11.546 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.546 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.546 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.546 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.546 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.805 nvme0n1 00:27:11.805 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:11.805 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.805 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.805 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.805 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:11.805 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:12.064 Running I/O for 2 seconds... 
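The randwrite pass that starts here uses the setup sequence the trace just walked through: enable NVMe error statistics and unlimited bdev-level retries on the freshly started bdevperf, attach the NVMe-oF/TCP target with data digest enabled, arm crc32c error injection in the software accel layer, and then release the queued workload. A sketch of that RPC sequence, reconstructed from the bperf_rpc/rpc_cmd lines above (the RPC and SOCK shell variables are ours, and the two accel_error_inject_error calls are assumed to use the application's default RPC socket, matching the rpc_cmd calls in the trace, which is why no -s option is passed for them):

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock   # bdevperf was started with -r /var/tmp/bperf.sock

    # Keep per-status NVMe error counters and retry failed I/O indefinitely.
    $RPC -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no crc32c corruption is active while the controller attaches.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach the target with data digest (--ddgst) so every payload is CRC-checked.
    $RPC -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 256 crc32c operations so data digests stop matching.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the queued randwrite workload (4096-byte I/O, qd=128, 2 seconds).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests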
00:27:12.064 [2024-07-14 20:27:00.961394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f6458 00:27:12.064 [2024-07-14 20:27:00.962389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.064 [2024-07-14 20:27:00.962428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:12.064 [2024-07-14 20:27:00.972273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f4f40 00:27:12.064 [2024-07-14 20:27:00.973229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.064 [2024-07-14 20:27:00.973257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.064 [2024-07-14 20:27:00.982210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ef6a8 00:27:12.064 [2024-07-14 20:27:00.983057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.064 [2024-07-14 20:27:00.983086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.064 [2024-07-14 20:27:00.994403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e27f0 00:27:12.064 [2024-07-14 20:27:00.995841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.064 [2024-07-14 20:27:00.995876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.064 [2024-07-14 20:27:01.004764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3e60 00:27:12.064 [2024-07-14 20:27:01.006086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.064 [2024-07-14 20:27:01.006113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:12.064 [2024-07-14 20:27:01.014009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fd640 00:27:12.064 [2024-07-14 20:27:01.015081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.064 [2024-07-14 20:27:01.015110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.023914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f9f68 00:27:12.065 [2024-07-14 20:27:01.024798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.024824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 
cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.033117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e95a0 00:27:12.065 [2024-07-14 20:27:01.033874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.033901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.042583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f2948 00:27:12.065 [2024-07-14 20:27:01.043264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.043308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.054383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fac10 00:27:12.065 [2024-07-14 20:27:01.055620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.055646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.064148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df118 00:27:12.065 [2024-07-14 20:27:01.065142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.065169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.073906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ebb98 00:27:12.065 [2024-07-14 20:27:01.074908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.074970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.085077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ec840 00:27:12.065 [2024-07-14 20:27:01.085987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.086016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.096553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190de8a8 00:27:12.065 [2024-07-14 20:27:01.097874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.097900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.105875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e6738 00:27:12.065 [2024-07-14 20:27:01.107017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.107047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.115673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f46d0 00:27:12.065 [2024-07-14 20:27:01.116614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.116640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.125544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e1f80 00:27:12.065 [2024-07-14 20:27:01.126621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.126648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.136076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f5378 00:27:12.065 [2024-07-14 20:27:01.137275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.137305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.065 [2024-07-14 20:27:01.146517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f4298 00:27:12.065 [2024-07-14 20:27:01.147679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.065 [2024-07-14 20:27:01.147707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.324 [2024-07-14 20:27:01.157557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f0ff8 00:27:12.325 [2024-07-14 20:27:01.158780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.158805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.166955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f6458 00:27:12.325 [2024-07-14 20:27:01.168063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.168089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.176881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e99d8 00:27:12.325 [2024-07-14 20:27:01.177739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.177770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.187918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e73e0 00:27:12.325 [2024-07-14 20:27:01.189135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.189161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.197595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e49b0 00:27:12.325 [2024-07-14 20:27:01.198804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.198829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.207650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3a28 00:27:12.325 [2024-07-14 20:27:01.208474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.208500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.217272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ef6a8 00:27:12.325 [2024-07-14 20:27:01.218419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.218445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.227036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190feb58 00:27:12.325 [2024-07-14 20:27:01.228218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.228245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.237380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f2948 00:27:12.325 [2024-07-14 20:27:01.238618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.238643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.246197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ddc00 00:27:12.325 [2024-07-14 20:27:01.247806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.247833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.257238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e49b0 00:27:12.325 [2024-07-14 20:27:01.258355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.258380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.266391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ec840 00:27:12.325 [2024-07-14 20:27:01.267431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.267457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.276672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e4578 00:27:12.325 [2024-07-14 20:27:01.277395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.277422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.286094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ebfd0 00:27:12.325 [2024-07-14 20:27:01.286692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.286719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.295546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e8088 00:27:12.325 [2024-07-14 20:27:01.296013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.296037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.307820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fc560 00:27:12.325 [2024-07-14 20:27:01.309390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 
20:27:01.309416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.314868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190eb328 00:27:12.325 [2024-07-14 20:27:01.315716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.315742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.325368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e1710 00:27:12.325 [2024-07-14 20:27:01.326242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.326268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.335543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e0630 00:27:12.325 [2024-07-14 20:27:01.336450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.336481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.346717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f1430 00:27:12.325 [2024-07-14 20:27:01.348050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.348076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.355558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ea680 00:27:12.325 [2024-07-14 20:27:01.357141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.357167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.364181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df550 00:27:12.325 [2024-07-14 20:27:01.364788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.364814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.376238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e3d08 00:27:12.325 [2024-07-14 20:27:01.377272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:12.325 [2024-07-14 20:27:01.377298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.387219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ddc00 00:27:12.325 [2024-07-14 20:27:01.388750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.388777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.394410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190edd58 00:27:12.325 [2024-07-14 20:27:01.395099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.395127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.325 [2024-07-14 20:27:01.406885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f46d0 00:27:12.325 [2024-07-14 20:27:01.408286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.325 [2024-07-14 20:27:01.408313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.417947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ea248 00:27:12.585 [2024-07-14 20:27:01.419167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.419198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.427677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df550 00:27:12.585 [2024-07-14 20:27:01.428728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.428755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.437195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fa7d8 00:27:12.585 [2024-07-14 20:27:01.438161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.438188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.447598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fbcf0 00:27:12.585 [2024-07-14 20:27:01.448514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4551 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.448542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.457777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e6b70 00:27:12.585 [2024-07-14 20:27:01.458293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.458318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.469588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f4298 00:27:12.585 [2024-07-14 20:27:01.470766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.470793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.478457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e8d30 00:27:12.585 [2024-07-14 20:27:01.480104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.480131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.487085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e7818 00:27:12.585 [2024-07-14 20:27:01.487746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.487772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.497490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e6b70 00:27:12.585 [2024-07-14 20:27:01.498183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.498210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.508772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f20d8 00:27:12.585 [2024-07-14 20:27:01.509861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.509899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.520774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e9e10 00:27:12.585 [2024-07-14 20:27:01.522438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 
nsid:1 lba:14631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.522466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.527868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f9b30 00:27:12.585 [2024-07-14 20:27:01.528545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.528571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.540459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f81e0 00:27:12.585 [2024-07-14 20:27:01.541959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.541984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.547753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f1430 00:27:12.585 [2024-07-14 20:27:01.548566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.548592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.557787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e73e0 00:27:12.585 [2024-07-14 20:27:01.558598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.558624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.569502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e49b0 00:27:12.585 [2024-07-14 20:27:01.570836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.570873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.578336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fdeb0 00:27:12.585 [2024-07-14 20:27:01.580017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.585 [2024-07-14 20:27:01.580043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.585 [2024-07-14 20:27:01.588994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e9168 00:27:12.586 [2024-07-14 20:27:01.589859] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.589895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.598416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3e60 00:27:12.586 [2024-07-14 20:27:01.599141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.599169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.608292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f7da8 00:27:12.586 [2024-07-14 20:27:01.609251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.609277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.617472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f6020 00:27:12.586 [2024-07-14 20:27:01.618319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.618345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.627907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ebb98 00:27:12.586 [2024-07-14 20:27:01.628912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.628939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.637591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e2c28 00:27:12.586 [2024-07-14 20:27:01.638416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.638442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.647061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f0350 00:27:12.586 [2024-07-14 20:27:01.647799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.647825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.659881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190feb58 00:27:12.586 [2024-07-14 20:27:01.661504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.661530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.586 [2024-07-14 20:27:01.667283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f8618 00:27:12.586 [2024-07-14 20:27:01.668276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.586 [2024-07-14 20:27:01.668302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.680342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ea248 00:27:12.847 [2024-07-14 20:27:01.681786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.681812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.690556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e2c28 00:27:12.847 [2024-07-14 20:27:01.692072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.692097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.697479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e1b48 00:27:12.847 [2024-07-14 20:27:01.698100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.698126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.707615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f81e0 00:27:12.847 [2024-07-14 20:27:01.708202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.708229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.718898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ff3c8 00:27:12.847 [2024-07-14 20:27:01.720184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.720213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.728889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df550 00:27:12.847 [2024-07-14 
20:27:01.729958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.729984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.739049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f7538 00:27:12.847 [2024-07-14 20:27:01.739778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.739806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.749492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e73e0 00:27:12.847 [2024-07-14 20:27:01.750355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.750382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.759065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f6890 00:27:12.847 [2024-07-14 20:27:01.759984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.760011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.771220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e3d08 00:27:12.847 [2024-07-14 20:27:01.772719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.772746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.778458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f2510 00:27:12.847 [2024-07-14 20:27:01.779137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.779175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.790665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f2d80 00:27:12.847 [2024-07-14 20:27:01.791967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.791994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.801079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with 
pdu=0x2000190e4de8 00:27:12.847 [2024-07-14 20:27:01.802230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.802256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.812508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fcdd0 00:27:12.847 [2024-07-14 20:27:01.814134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.814160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.819790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ec840 00:27:12.847 [2024-07-14 20:27:01.820437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.820464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.831222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e38d0 00:27:12.847 [2024-07-14 20:27:01.832089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.832117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.841475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ec408 00:27:12.847 [2024-07-14 20:27:01.842523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.842549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.850946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3e60 00:27:12.847 [2024-07-14 20:27:01.851935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.851961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.860894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e99d8 00:27:12.847 [2024-07-14 20:27:01.861923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.861948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.872857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1da5b50) with pdu=0x2000190e5ec8 00:27:12.847 [2024-07-14 20:27:01.874495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.847 [2024-07-14 20:27:01.874522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.847 [2024-07-14 20:27:01.880024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e4140 00:27:12.847 [2024-07-14 20:27:01.880674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.848 [2024-07-14 20:27:01.880700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.848 [2024-07-14 20:27:01.893484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fcdd0 00:27:12.848 [2024-07-14 20:27:01.895003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.848 [2024-07-14 20:27:01.895030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.848 [2024-07-14 20:27:01.900346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df550 00:27:12.848 [2024-07-14 20:27:01.901006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.848 [2024-07-14 20:27:01.901033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:12.848 [2024-07-14 20:27:01.910499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e6300 00:27:12.848 [2024-07-14 20:27:01.911171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.848 [2024-07-14 20:27:01.911198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.848 [2024-07-14 20:27:01.922233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f96f8 00:27:12.848 [2024-07-14 20:27:01.923512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.848 [2024-07-14 20:27:01.923537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:01.932629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e38d0 00:27:13.167 [2024-07-14 20:27:01.933754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.167 [2024-07-14 20:27:01.933784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:01.944086] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e8088 00:27:13.167 [2024-07-14 20:27:01.945316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.167 [2024-07-14 20:27:01.945346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:01.955751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fc560 00:27:13.167 [2024-07-14 20:27:01.956944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.167 [2024-07-14 20:27:01.956970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:01.968366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f46d0 00:27:13.167 [2024-07-14 20:27:01.969993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.167 [2024-07-14 20:27:01.970019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:01.975643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ddc00 00:27:13.167 [2024-07-14 20:27:01.976542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.167 [2024-07-14 20:27:01.976567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:01.988221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e9e10 00:27:13.167 [2024-07-14 20:27:01.989492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.167 [2024-07-14 20:27:01.989518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:01.996417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ebfd0 00:27:13.167 [2024-07-14 20:27:01.997131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.167 [2024-07-14 20:27:01.997157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.167 [2024-07-14 20:27:02.009195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f2510 00:27:13.167 [2024-07-14 20:27:02.010726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.010752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.016291] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f31b8 00:27:13.168 [2024-07-14 20:27:02.017043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.017068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.028692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df118 00:27:13.168 [2024-07-14 20:27:02.029974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.029999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.038355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e3498 00:27:13.168 [2024-07-14 20:27:02.039514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.039540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.048345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190eea00 00:27:13.168 [2024-07-14 20:27:02.049378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.049404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.058900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ea680 00:27:13.168 [2024-07-14 20:27:02.060164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.060190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.069242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fbcf0 00:27:13.168 [2024-07-14 20:27:02.070009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.070035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.078653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fcdd0 00:27:13.168 [2024-07-14 20:27:02.080357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.080384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:13.168 
[2024-07-14 20:27:02.089714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e3498 00:27:13.168 [2024-07-14 20:27:02.090531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.090558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.098976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e12d8 00:27:13.168 [2024-07-14 20:27:02.100123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.100149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.109585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f35f0 00:27:13.168 [2024-07-14 20:27:02.110590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.110617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.121132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e9e10 00:27:13.168 [2024-07-14 20:27:02.122251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.122277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.132076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190edd58 00:27:13.168 [2024-07-14 20:27:02.133133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.133159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.142187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190efae0 00:27:13.168 [2024-07-14 20:27:02.143178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.143208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.155257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ddc00 00:27:13.168 [2024-07-14 20:27:02.156925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.156968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 
m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.162833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fd208 00:27:13.168 [2024-07-14 20:27:02.163646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.163673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.174995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190edd58 00:27:13.168 [2024-07-14 20:27:02.176231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.176257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.184941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3e60 00:27:13.168 [2024-07-14 20:27:02.185948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.185974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.195673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fc560 00:27:13.168 [2024-07-14 20:27:02.196811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.196840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.207719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f6890 00:27:13.168 [2024-07-14 20:27:02.208690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.208718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.220461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fa7d8 00:27:13.168 [2024-07-14 20:27:02.222029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.222055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.228152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e6300 00:27:13.168 [2024-07-14 20:27:02.228802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.228853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 
cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:13.168 [2024-07-14 20:27:02.240807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190feb58 00:27:13.168 [2024-07-14 20:27:02.242148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.168 [2024-07-14 20:27:02.242174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.252079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f5378 00:27:13.460 [2024-07-14 20:27:02.253574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.253602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.261874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df550 00:27:13.460 [2024-07-14 20:27:02.262798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.262826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.275439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e6fa8 00:27:13.460 [2024-07-14 20:27:02.276899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.276925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.285380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e3498 00:27:13.460 [2024-07-14 20:27:02.286620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.286648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.295604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e12d8 00:27:13.460 [2024-07-14 20:27:02.296658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.296685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.305125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e84c0 00:27:13.460 [2024-07-14 20:27:02.305970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.305996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.317135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f9b30 00:27:13.460 [2024-07-14 20:27:02.318528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.318555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.324512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e99d8 00:27:13.460 [2024-07-14 20:27:02.325102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.325127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.336862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fb480 00:27:13.460 [2024-07-14 20:27:02.338010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.338036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.346422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fcdd0 00:27:13.460 [2024-07-14 20:27:02.347499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.347526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.356380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f7970 00:27:13.460 [2024-07-14 20:27:02.357121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.357147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.365838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f35f0 00:27:13.460 [2024-07-14 20:27:02.366479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.366515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.377915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f9f68 00:27:13.460 [2024-07-14 20:27:02.379311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.379353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.388720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f2d80 00:27:13.460 [2024-07-14 20:27:02.390207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.390233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.396315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ef6a8 00:27:13.460 [2024-07-14 20:27:02.397029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.397056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.407411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3a28 00:27:13.460 [2024-07-14 20:27:02.408225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.408252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.421013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190efae0 00:27:13.460 [2024-07-14 20:27:02.422386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.422429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.431016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e95a0 00:27:13.460 [2024-07-14 20:27:02.432245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.432272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.441361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fcdd0 00:27:13.460 [2024-07-14 20:27:02.442414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.442440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.451561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f9b30 00:27:13.460 [2024-07-14 20:27:02.452584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.452611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.464405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fb480 00:27:13.460 [2024-07-14 20:27:02.465962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.465989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.472202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fbcf0 00:27:13.460 [2024-07-14 20:27:02.473022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.473050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.485870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e73e0 00:27:13.460 [2024-07-14 20:27:02.487230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.487261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.495837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ee5c8 00:27:13.460 [2024-07-14 20:27:02.496877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.496910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.506176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e27f0 00:27:13.460 [2024-07-14 20:27:02.506784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.506811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.516238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f96f8 00:27:13.460 [2024-07-14 20:27:02.516726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.460 [2024-07-14 20:27:02.516761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:13.460 [2024-07-14 20:27:02.528345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e9e10 00:27:13.461 [2024-07-14 20:27:02.529664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.461 [2024-07-14 
20:27:02.529689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.461 [2024-07-14 20:27:02.538472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e99d8 00:27:13.461 [2024-07-14 20:27:02.540078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.461 [2024-07-14 20:27:02.540103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.549626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ebfd0 00:27:13.720 [2024-07-14 20:27:02.551321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.551361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.556650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f4298 00:27:13.720 [2024-07-14 20:27:02.557471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.557496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.567283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fef90 00:27:13.720 [2024-07-14 20:27:02.567738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.567770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.577864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3e60 00:27:13.720 [2024-07-14 20:27:02.578466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.578492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.588103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3e60 00:27:13.720 [2024-07-14 20:27:02.588994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.589020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.597745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fef90 00:27:13.720 [2024-07-14 20:27:02.598457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:13.720 [2024-07-14 20:27:02.598483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.607615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fc560 00:27:13.720 [2024-07-14 20:27:02.608063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.608085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.617712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f1868 00:27:13.720 [2024-07-14 20:27:02.618285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.618311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.627997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f3e60 00:27:13.720 [2024-07-14 20:27:02.628683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.628710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.637457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190df550 00:27:13.720 [2024-07-14 20:27:02.638474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.638500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.647794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fb8b8 00:27:13.720 [2024-07-14 20:27:02.648720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.648746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.657776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e3060 00:27:13.720 [2024-07-14 20:27:02.658557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.658583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.668615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e01f8 00:27:13.720 [2024-07-14 20:27:02.669112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4434 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.669137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.681815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e88f8 00:27:13.720 [2024-07-14 20:27:02.683478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.683505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.689063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f5378 00:27:13.720 [2024-07-14 20:27:02.689854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.720 [2024-07-14 20:27:02.689898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:13.720 [2024-07-14 20:27:02.699352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f7da8 00:27:13.720 [2024-07-14 20:27:02.700095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.700122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.710780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f96f8 00:27:13.721 [2024-07-14 20:27:02.712153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.712179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.719990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f0788 00:27:13.721 [2024-07-14 20:27:02.721058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.721084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.729564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fcdd0 00:27:13.721 [2024-07-14 20:27:02.730463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.730489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.739192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e23b8 00:27:13.721 [2024-07-14 20:27:02.740259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:12186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.740285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.751745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f46d0 00:27:13.721 [2024-07-14 20:27:02.753272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.753299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.759053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e6b70 00:27:13.721 [2024-07-14 20:27:02.759793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.759818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.770796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fe720 00:27:13.721 [2024-07-14 20:27:02.772151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.772177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.780084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ea248 00:27:13.721 [2024-07-14 20:27:02.781070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.781096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.789560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fe2e8 00:27:13.721 [2024-07-14 20:27:02.790384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.790409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:13.721 [2024-07-14 20:27:02.798835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e4140 00:27:13.721 [2024-07-14 20:27:02.799690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.721 [2024-07-14 20:27:02.799716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.809896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e0ea0 00:27:13.980 [2024-07-14 20:27:02.810492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:2397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.810519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.822587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f92c0 00:27:13.980 [2024-07-14 20:27:02.823839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.823877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.832401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fd208 00:27:13.980 [2024-07-14 20:27:02.833520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.833546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.842335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190f2948 00:27:13.980 [2024-07-14 20:27:02.843076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.843106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.851363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e38d0 00:27:13.980 [2024-07-14 20:27:02.852285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.852312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.860992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e7818 00:27:13.980 [2024-07-14 20:27:02.861680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.861706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.874094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ef6a8 00:27:13.980 [2024-07-14 20:27:02.875785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.875812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.881189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ed920 00:27:13.980 [2024-07-14 20:27:02.881937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.881963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.893024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e8d30 00:27:13.980 [2024-07-14 20:27:02.894233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.894258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.901748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e5220 00:27:13.980 [2024-07-14 20:27:02.903355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.903381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.910226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fda78 00:27:13.980 [2024-07-14 20:27:02.910799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.910825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.921256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190fb8b8 00:27:13.980 [2024-07-14 20:27:02.921992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.922018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.931981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e4de8 00:27:13.980 [2024-07-14 20:27:02.932820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.932845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.943718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190ea680 00:27:13.980 [2024-07-14 20:27:02.945095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.980 [2024-07-14 20:27:02.945120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:13.980 [2024-07-14 20:27:02.950594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5b50) with pdu=0x2000190e5ec8 00:27:13.981 [2024-07-14 
20:27:02.951323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:13.981 [2024-07-14 20:27:02.951365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:13.981
00:27:13.981 Latency(us)
00:27:13.981 Device Information : runtime(s)      IOPS     MiB/s   Fail/s    TO/s   Average       min       max
00:27:13.981 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:13.981 nvme0n1            :       2.01  24789.14     96.83     0.00    0.00   5158.61   1995.87  14120.03
00:27:13.981 ===================================================================================================================
00:27:13.981 Total              :              24789.14     96.83     0.00    0.00   5158.61   1995.87  14120.03
00:27:13.981 0
00:27:13.981 20:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:13.981 20:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:13.981 20:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:13.981 20:27:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:13.981 | .driver_specific
00:27:13.981 | .nvme_error
00:27:13.981 | .status_code
00:27:13.981 | .command_transient_transport_error'
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112350
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112350 ']'
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112350
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112350
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:14.259 killing process with pid 112350
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112350'
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112350
00:27:14.259 Received shutdown signal, test time was about 2.000000 seconds
00:27:14.259
00:27:14.259 Latency(us)
00:27:14.259 Device Information : runtime(s)      IOPS     MiB/s   Fail/s    TO/s   Average       min       max
00:27:14.259 ===================================================================================================================
00:27:14.259 Total              :                  0.00      0.00     0.00    0.00      0.00      0.00      0.00
00:27:14.259 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112350
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112435
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112435 /var/tmp/bperf.sock
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112435 ']'
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:14.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:14.517 20:27:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.775 [2024-07-14 20:27:03.606144] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:27:14.775 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:14.775 Zero copy mechanism will not be used.
00:27:14.775 [2024-07-14 20:27:03.606251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112435 ]
00:27:14.775 [2024-07-14 20:27:03.740029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:14.775 [2024-07-14 20:27:03.833895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:15.708 20:27:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:16.281 nvme0n1
00:27:16.281 20:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:16.281 20:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:16.281 20:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:16.281 20:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:16.281 20:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:16.281 20:27:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:16.281 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:16.281 Zero copy mechanism will not be used.
00:27:16.281 Running I/O for 2 seconds...
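The trace above is the complete setup for this error run. As a minimal shell sketch of the same sequence, assuming the bdevperf process launched above is already listening on /var/tmp/bperf.sock (it was started with -z, so it idles until told to run) and the nvmf target configured earlier in this job is still serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; every command, flag, and path here is taken from the trace itself, not from a reference recipe:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Error accounting and retry options, exactly as host/digest.sh sets them before attaching.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c error injection left over from the previous subtest.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach the target over TCP with data digest enabled; this exposes bdev nvme0n1.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm crc32c corruption with the same arguments the test uses.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # Trigger the queued workload (randwrite, 131072-byte I/O, queue depth 16, 2 seconds, per the bdevperf command line above).
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests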
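Once the two-second run finishes, the harness repeats the check it performed for the 4096-byte run earlier (the (( 195 > 0 )) step): it pulls the bdev's NVMe error counters and requires at least one TRANSIENT TRANSPORT ERROR completion. A self-contained sketch of that check, using only the RPC and jq filter visible in the trace:

    # Assumes the same bperf RPC socket and bdev name used throughout this trace.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The assertion only needs the counter to be non-zero; the earlier run reported 195 here.
    (( errcount > 0 ))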
00:27:16.281 [2024-07-14 20:27:05.222160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.222428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.222456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.226719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.227000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.227023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.231188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.231454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.231479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.235559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.235784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.235805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.240144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.240367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.240392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.244404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.244626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.244651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.248704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.248939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.248959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.253104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.253329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.253353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.257476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.257713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.257737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.261757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.261992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.262017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.266025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.266249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.266268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.270334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.270569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.270588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.274685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.274986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.275007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.279369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.279648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.279668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.283966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.284217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.284241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.288428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.288677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.288701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.292932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.293182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.293206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.297437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.297699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.297723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.301964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.302227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.302246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.306454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.306745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.306765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.310990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.311233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.311258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.315494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.315744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.315768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.319992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.320242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.320266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.324413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.324677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.324701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.328996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.329259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.329282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.333585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.333836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.333868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.338147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.338397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.338421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.342631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.342890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 
[2024-07-14 20:27:05.342910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.347267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.347551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.347587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.351795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.352057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.352081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.356568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.356846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.356878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.281 [2024-07-14 20:27:05.361454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.281 [2024-07-14 20:27:05.361706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.281 [2024-07-14 20:27:05.361730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.366207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.541 [2024-07-14 20:27:05.366459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.541 [2024-07-14 20:27:05.366483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.370971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.541 [2024-07-14 20:27:05.371297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.541 [2024-07-14 20:27:05.371322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.375513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.541 [2024-07-14 20:27:05.375761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.541 [2024-07-14 20:27:05.375785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.380030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.541 [2024-07-14 20:27:05.380282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.541 [2024-07-14 20:27:05.380312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.384577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.541 [2024-07-14 20:27:05.384826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.541 [2024-07-14 20:27:05.384850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.389047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.541 [2024-07-14 20:27:05.389299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.541 [2024-07-14 20:27:05.389323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.393535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.541 [2024-07-14 20:27:05.393784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.541 [2024-07-14 20:27:05.393808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.541 [2024-07-14 20:27:05.398019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.398269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.398292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.402487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.402748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.402773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.407020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.407275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.407299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.411544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.411806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.411829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.416032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.416282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.416305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.420529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.420779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.420803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.425055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.425307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.425330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.429483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.429732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.429756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.434031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.434281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.434304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.438463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.438712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.438735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.443015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.443290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.443315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.447566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.447816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.447839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.452102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.452353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.452376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.456622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.456885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.456908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.461202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.461452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.461476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.465773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.466058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.466083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.470338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 
[2024-07-14 20:27:05.470596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.470620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.474796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.475108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.475133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.479546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.479806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.479830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.484230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.484487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.484511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.488737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.489004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.489028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.493298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.493559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.493582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.497807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.498092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.498131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.502306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.502557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.502580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.506952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.507260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.507287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.511474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.511722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.511746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.516131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.516395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.516418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.520672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.520939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.520963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.542 [2024-07-14 20:27:05.525188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.542 [2024-07-14 20:27:05.525438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.542 [2024-07-14 20:27:05.525472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.529726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.529999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.530023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.534278] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.534540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.534564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.538778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.539066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.539091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.543311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.543576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.543600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.547878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.548138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.548161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.552408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.552657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.552680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.556912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.557163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.557186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.561413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.561676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.561700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
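The repeated tcp.c:2058:data_crc32_calc_done errors above mean the CRC32C data digest the host computed over a received data PDU payload did not match the DDGST value carried in the PDU, so each affected WRITE is failed back to the caller. A minimal sketch of that kind of digest check, assuming a generic bitwise crc32c() helper (not SPDK's internal routines, which use tables or hardware CRC instructions):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (reflected) CRC32C, polynomial 0x1EDC6F41 (reflected 0x82F63B78).
 * Illustrative only. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical check mirroring what data_crc32_calc_done reports: compare the
 * digest computed over the received data PDU payload against the digest field
 * carried in the PDU. */
static int verify_data_digest(const uint8_t *payload, size_t len, uint32_t digest_from_pdu)
{
    uint32_t calc = crc32c(payload, len);
    if (calc != digest_from_pdu) {
        fprintf(stderr, "Data digest error: calculated 0x%08x, PDU carried 0x%08x\n",
                calc, digest_from_pdu);
        return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t payload[16 * 1024];                   /* stand-in for a 32-block WRITE payload */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t good = crc32c(payload, sizeof(payload));
    verify_data_digest(payload, sizeof(payload), good);        /* matches: no output */
    verify_data_digest(payload, sizeof(payload), good ^ 0x1u); /* corrupted digest: reports error */
    return 0;
}

In this test the digests are deliberately corrupted, so every comparison fails and the driver logs the error, as seen above.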
00:27:16.543 [2024-07-14 20:27:05.565920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.566183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.566206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.570517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.570781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.570804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.575087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.575341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.575395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.579698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.579980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.580004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.584460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.584729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.584753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.589268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.589523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.589549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.593958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.594215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.594239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.598586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.598844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.598891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.603345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.603603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.603628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.608021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.608278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.608302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.612703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.612962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.612986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.617383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.617644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.617668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.543 [2024-07-14 20:27:05.621999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.543 [2024-07-14 20:27:05.622249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.543 [2024-07-14 20:27:05.622273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.802 [2024-07-14 20:27:05.627159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.802 [2024-07-14 20:27:05.627511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.802 [2024-07-14 20:27:05.627543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.802 [2024-07-14 20:27:05.631970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.802 [2024-07-14 20:27:05.632219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.632242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.636481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.636736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.636760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.640985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.641237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.641260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.645485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.645736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.645759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.650081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.650346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.650370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.654587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.654836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.654868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.659226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.659541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.659565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.663861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.664148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.664203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.668462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.668712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.668735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.673011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.673273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.677614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.677875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.677924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.682257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.682508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.682532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.686759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.687047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.687071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.691330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.691598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 
[2024-07-14 20:27:05.691621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.695824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.696097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.696120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.700424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.700674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.700698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.704916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.705165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.705188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.709372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.709622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.709646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.713951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.714201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.714224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.718435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.718709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.718734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.723032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.723294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.723318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.727593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.727843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.727875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.732244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.732495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.732518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.736764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.737044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.737068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.741251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.741501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.741525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.745937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.746195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.746220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.750580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.750837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.750870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.755357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.755629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.803 [2024-07-14 20:27:05.755653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.803 [2024-07-14 20:27:05.760043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.803 [2024-07-14 20:27:05.760301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.760326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.764695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.764965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.764990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.769361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.769615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.769640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.773931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.774180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.774205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.779051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.779338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.779376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.784152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.784386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.784409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.788602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.788850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.788882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.793214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.793476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.793500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.797743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.798005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.798029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.802293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.802542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.802566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.806812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.807124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.807149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.811395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.811646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.811671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.815902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.816168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.816191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.820517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 
[2024-07-14 20:27:05.820786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.820811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.825027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.825275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.825298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.829468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.829719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.829743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.834026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.834295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.834319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.838511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.838774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.838805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.843089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.843395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.843435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.847647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.847910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.847942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.852233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.852497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.852520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.856768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.857043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.857067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.861208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.861460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.861484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.865737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.865997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.866021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.870246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.870508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.870532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.874769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.875088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.875114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.879400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.879650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.879674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.804 [2024-07-14 20:27:05.884456] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:16.804 [2024-07-14 20:27:05.884746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.804 [2024-07-14 20:27:05.884772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.889355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.889616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.889641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.894154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.894406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.894430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.898631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.898894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.898952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.903188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.903503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.903528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.907834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.908113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.908169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.912394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.912644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.912667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
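Each failed WRITE above completes with status (00/22): status code type 0x0 (generic command status) and status code 0x22, which the driver prints as COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 means the Do Not Retry bit is clear, so the host is allowed to retry the command. A small sketch of how those printed fields relate to the completion status, using illustrative C bitfields rather than SPDK's own definitions (C bitfield layout is compiler-dependent, so this is a conceptual mapping, not a bit-exact one):

#include <stdint.h>
#include <stdio.h>

/* Fields the log prints for each completion: (sct/sc), p, m, dnr. */
struct nvme_status_fields {
    uint16_t p   : 1;  /* phase tag */
    uint16_t sc  : 8;  /* status code */
    uint16_t sct : 3;  /* status code type */
    uint16_t crd : 2;  /* command retry delay */
    uint16_t m   : 1;  /* more */
    uint16_t dnr : 1;  /* do not retry */
};

int main(void)
{
    /* A completion printed as "(00/22) ... p:0 m:0 dnr:0":
     * SCT 0x0 (generic command status), SC 0x22 (transient transport error). */
    struct nvme_status_fields st = { .p = 0, .sc = 0x22, .sct = 0x0, .crd = 0, .m = 0, .dnr = 0 };

    int retriable = (st.dnr == 0);  /* DNR clear: the WRITE may be resubmitted */
    printf("status (%02x/%02x) p:%u m:%u dnr:%u -> %s\n",
           st.sct, st.sc, st.p, st.m, st.dnr,
           retriable ? "transient, retry allowed" : "do not retry");
    return 0;
}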
00:27:17.064 [2024-07-14 20:27:05.916876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.917125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.917148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.921445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.921710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.921734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.925938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.926199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.926222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.930465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.930715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.064 [2024-07-14 20:27:05.930739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.064 [2024-07-14 20:27:05.935051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.064 [2024-07-14 20:27:05.935295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.935370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.939581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.939831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.939865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.944095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.944345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.944369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.948510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.948759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.948782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.952982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.953233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.953256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.957543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.957791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.957820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.962075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.962324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.962348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.966635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.966900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.966948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.971320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.971604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.971643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.975944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.976214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.976238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.980554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.980811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.980835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.985178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.985415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.985438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.989635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.989898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.989922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.994133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.994383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.994407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:05.998548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:05.998799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:05.998823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.003132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.003418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.003442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.007703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.007969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.007992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.012281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.012533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.012556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.016769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.017044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.017069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.021257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.021507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.021531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.025799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.026074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.026098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.030317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.030576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.030600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.035244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.035552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.035592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.040249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.040513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 
[2024-07-14 20:27:06.040537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.044806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.045067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.045090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.049318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.049566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.049590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.053799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.054062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.054085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.058360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.058624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.058647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.062828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.065 [2024-07-14 20:27:06.063119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.065 [2024-07-14 20:27:06.063144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.065 [2024-07-14 20:27:06.067408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.067657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.067681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.072172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.072438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.072462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.076986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.077266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.077291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.081895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.082196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.082221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.087145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.087473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.087497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.092244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.092516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.092541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.097141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.097465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.102101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.102391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.102416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.106886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.107232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.107288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.111753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.112066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.112087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.116427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.116688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.116712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.121010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.121258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.121282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.125619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.125898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.125922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.130158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.130426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.130450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.134742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.135049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.135076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.139418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.139682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.139706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.066 [2024-07-14 20:27:06.144005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.066 [2024-07-14 20:27:06.144328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.066 [2024-07-14 20:27:06.144352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.328 [2024-07-14 20:27:06.148986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.328 [2024-07-14 20:27:06.149301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.328 [2024-07-14 20:27:06.149324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.328 [2024-07-14 20:27:06.153761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.154105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.154130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.158395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.158645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.158669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.162986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.163306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.163330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.167603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.167851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.167882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.172143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 
[2024-07-14 20:27:06.172393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.172417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.176667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.176928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.176948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.181186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.181435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.181485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.185764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.186048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.186072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.190296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.190556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.190580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.194888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.195223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.195248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.200145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.200396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.200421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.204611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.204886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.204909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.209077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.209327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.209351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.213547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.213808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.213832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.218231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.218481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.218505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.222807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.223154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.223176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.227736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.228027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.228046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.232257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.232524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.232559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.236865] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.237138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.237162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.241539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.241807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.241831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.246367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.246636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.246660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.251164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.251465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.251489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.255827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.256103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.256127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.260334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.260597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.260621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.264792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.265055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.265079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:17.329 [2024-07-14 20:27:06.269322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.269584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.269607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.273797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.274077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.274101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.278443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.278705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.278729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.329 [2024-07-14 20:27:06.283022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.329 [2024-07-14 20:27:06.283298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.329 [2024-07-14 20:27:06.283323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.287577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.287827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.287850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.292075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.292325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.292348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.296608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.296856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.296889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.301135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.301404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.301429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.305634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.305923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.305949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.310142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.310409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.310433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.314630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.314906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.314968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.319137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.319425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.319449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.323635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.323885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.323918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.328197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.328446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.328470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.332653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.332934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.332958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.337147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.337409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.337433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.341807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.342131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.342156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.347043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.347340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.347363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.351688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.351936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.351955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.356147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.356411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.356436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.360634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.360925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.360945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.365173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.365441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.365465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.369665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.369940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.369964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.374221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.374483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.374507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.378889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.379197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.379222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.383551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.383812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.383835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.388110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.388358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.388382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.392616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.392892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 
[2024-07-14 20:27:06.392915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.397167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.397434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.397458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.401724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.401985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.402009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.330 [2024-07-14 20:27:06.406259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.330 [2024-07-14 20:27:06.406549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.330 [2024-07-14 20:27:06.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.590 [2024-07-14 20:27:06.411322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.590 [2024-07-14 20:27:06.411587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.590 [2024-07-14 20:27:06.411610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.590 [2024-07-14 20:27:06.415899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.590 [2024-07-14 20:27:06.416160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.590 [2024-07-14 20:27:06.416183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.590 [2024-07-14 20:27:06.420857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.590 [2024-07-14 20:27:06.421152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.590 [2024-07-14 20:27:06.421176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.590 [2024-07-14 20:27:06.425478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.590 [2024-07-14 20:27:06.425727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.590 [2024-07-14 20:27:06.425767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.590 [2024-07-14 20:27:06.429994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.590 [2024-07-14 20:27:06.430244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.590 [2024-07-14 20:27:06.430268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.590 [2024-07-14 20:27:06.434558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.590 [2024-07-14 20:27:06.434806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.590 [2024-07-14 20:27:06.434830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.590 [2024-07-14 20:27:06.439073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.590 [2024-07-14 20:27:06.439380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.439404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.443680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.443929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.443962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.448172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.448453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.448478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.452719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.453000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.453024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.457218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.457484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.457509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.461778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.462053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.462076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.466337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.466599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.466623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.470768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.471107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.471132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.475391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.475640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.475665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.480022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.480278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.480301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.484552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.484813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.484832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.489085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.489397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.489422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.493722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.493984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.494007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.498345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.498606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.498631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.503025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.503313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.503337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.507635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.507897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.507930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.512243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.512499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.512523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.516896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.517150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.517174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.521498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 
[2024-07-14 20:27:06.521753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.521778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.526172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.526445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.526469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.530765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.531090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.531116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.535514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.535771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.535795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.540143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.540411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.540434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.544963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.545254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.545278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.549643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.549925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.549945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.554242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.554529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.554549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.558889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.559199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.559224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.563585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.563852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.563884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.568287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.591 [2024-07-14 20:27:06.568546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.591 [2024-07-14 20:27:06.568570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.591 [2024-07-14 20:27:06.572910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.573179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.573204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.577478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.577735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.577759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.582136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.582409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.582434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.586770] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.587101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.587128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.591491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.591761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.591785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.596169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.596435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.596459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.600787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.601056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.601081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.605422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.605678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.605702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.610071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.610347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.610372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.592 [2024-07-14 20:27:06.614712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.615023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.615049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:17.592 [2024-07-14 20:27:06.619474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.592 [2024-07-14 20:27:06.619730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.592 [2024-07-14 20:27:06.619754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.624089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.624346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.624370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.628787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.629055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.629079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.633377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.633655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.633680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.638160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.638444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.638469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.642828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.643152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.643178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.647628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.647897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.647931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.652342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.652599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.652624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.656978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.657235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.618 [2024-07-14 20:27:06.657259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.618 [2024-07-14 20:27:06.661644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.618 [2024-07-14 20:27:06.661940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.619 [2024-07-14 20:27:06.661965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.619 [2024-07-14 20:27:06.666396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.619 [2024-07-14 20:27:06.666651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.619 [2024-07-14 20:27:06.666675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.619 [2024-07-14 20:27:06.671290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.619 [2024-07-14 20:27:06.671563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.619 [2024-07-14 20:27:06.671588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.676366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.676631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.676656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.681084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.681343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.681368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.685737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.686025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.686050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.690445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.690700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.690725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.695170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.695478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.695502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.699876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.700141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.700165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.704514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.704770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.704795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.709719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.710013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.710040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.714759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.715070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.715098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.719558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.719825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.719850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.724231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.724508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.724534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.728906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.729170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.729195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.733537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.733814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.733839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.738202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.738460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.738480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.742800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.743152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.743201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.747732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.748020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 
[2024-07-14 20:27:06.748046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.752377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.752637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.752663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.757068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.757359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.757380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.761749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.762024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.762049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.766516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.766778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.766803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.771231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.771569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.771595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.776016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.776286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.776311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.780555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.878 [2024-07-14 20:27:06.780823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:17.878 [2024-07-14 20:27:06.780848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.878 [2024-07-14 20:27:06.785276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.785544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.785569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.789901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.790158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.790182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.794480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.794750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.794774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.799205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.799495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.799520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.803841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.804122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.804146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.808543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.808813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.808838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.813121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.813375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.813400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.817747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.818028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.818054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.822367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.822623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.822647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.826995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.827285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.827324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.831626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.831882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.831916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.836337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.836594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.836618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.840953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.841209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.841233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.845472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.845728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.845753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.850082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.850339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.850363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.854615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.854900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.854948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.859241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.859530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.859555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.863854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.864124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.864148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.868452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.868710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.868734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.872997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.873253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.873277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.877635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 
[2024-07-14 20:27:06.877899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.877919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.882227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.882500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.882519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.886835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.887147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.887172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.891455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.891711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.891735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.895994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.896251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.896275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.900534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.900791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.900815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.905112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.905367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.905391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.909714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.909982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.910007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.914318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.879 [2024-07-14 20:27:06.914576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.879 [2024-07-14 20:27:06.914601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.879 [2024-07-14 20:27:06.918851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.919174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.919200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.923608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.923866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.923900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.928176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.928433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.928458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.932822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.933112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.933132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.937422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.937679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.937704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.942006] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.942262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.942286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.946565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.946821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.946845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.951203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.951509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.955822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.956103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.956128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.880 [2024-07-14 20:27:06.960790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:17.880 [2024-07-14 20:27:06.961059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.880 [2024-07-14 20:27:06.961083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:06.965615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.965882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.965906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:06.970410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.970679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.970703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:18.140 [2024-07-14 20:27:06.975126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.975433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.975457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:06.979795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.980069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.980094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:06.984489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.984773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.984798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:06.989185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.989458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.989477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:06.993759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.994044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.994064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:06.998423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:06.998691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:06.998716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.003113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.003423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.003447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.007778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.008047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.008071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.012387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.012643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.012667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.017033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.017311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.017335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.021688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.021956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.021981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.026428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.026700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.026724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.031154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.031460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.031484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.035828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.036110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.036134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.040497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.040766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.040791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.045199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.045471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.045495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.049812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.050094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.050119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.054451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.140 [2024-07-14 20:27:07.054707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.140 [2024-07-14 20:27:07.054731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.140 [2024-07-14 20:27:07.059130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.059423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.059447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.063796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.064064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.064088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.068416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.068653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.068676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.073010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.073301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.073326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.077725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.078003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.078027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.082310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.082564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.082589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.087041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.087386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.087414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.092091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.092390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.092415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.097113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.097395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.097421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.101942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.102227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 
[2024-07-14 20:27:07.102252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.106902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.107235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.107257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.112034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.112339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.112365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.117115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.117406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.117430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.121938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.122194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.122218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.126667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.126976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.127003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.131522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.131779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.131804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.136290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.136560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.136585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.140940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.141197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.141221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.145612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.145881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.145905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.150711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.151038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.151059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.155700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.155988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.156040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.160407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.160664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.160688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.165137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.165400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.165425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.169799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.170067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.170092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.174440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.174696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.174720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.179291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.179578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.179602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.184110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.184397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.184422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.188949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.189219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.141 [2024-07-14 20:27:07.189242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.141 [2024-07-14 20:27:07.193607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.141 [2024-07-14 20:27:07.193863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.142 [2024-07-14 20:27:07.193909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.142 [2024-07-14 20:27:07.198260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.142 [2024-07-14 20:27:07.198515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.142 [2024-07-14 20:27:07.198540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.142 [2024-07-14 20:27:07.202902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.142 [2024-07-14 20:27:07.203198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.142 [2024-07-14 20:27:07.203224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.142 [2024-07-14 20:27:07.207618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1da5cf0) with pdu=0x2000190fef90 00:27:18.142 [2024-07-14 20:27:07.207890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.142 [2024-07-14 20:27:07.207933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.142 00:27:18.142 Latency(us) 00:27:18.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.142 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:18.142 nvme0n1 : 2.00 6666.94 833.37 0.00 0.00 2394.88 1735.21 12094.37 00:27:18.142 =================================================================================================================== 00:27:18.142 Total : 6666.94 833.37 0.00 0.00 2394.88 1735.21 12094.37 00:27:18.142 0 00:27:18.400 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:18.400 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:18.400 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:18.400 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:18.400 | .driver_specific 00:27:18.400 | .nvme_error 00:27:18.400 | .status_code 00:27:18.400 | .command_transient_transport_error' 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 430 > 0 )) 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112435 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112435 ']' 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112435 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112435 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:18.659 killing process with pid 112435 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112435' 00:27:18.659 Received shutdown signal, test time was about 2.000000 seconds 00:27:18.659 00:27:18.659 Latency(us) 00:27:18.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.659 =================================================================================================================== 
00:27:18.659 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112435 00:27:18.659 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112435 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 112129 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112129 ']' 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112129 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112129 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112129' 00:27:18.918 killing process with pid 112129 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112129 00:27:18.918 20:27:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112129 00:27:19.177 00:27:19.177 real 0m18.550s 00:27:19.177 user 0m34.968s 00:27:19.177 sys 0m4.861s 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:19.177 ************************************ 00:27:19.177 END TEST nvmf_digest_error 00:27:19.177 ************************************ 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.177 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.177 rmmod nvme_tcp 00:27:19.177 rmmod nvme_fabrics 00:27:19.177 rmmod nvme_keyring 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 112129 ']' 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 112129 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 112129 ']' 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 112129 00:27:19.436 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: 
line 950: kill: (112129) - No such process 00:27:19.436 Process with pid 112129 is not found 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 112129 is not found' 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:19.436 00:27:19.436 real 0m38.098s 00:27:19.436 user 1m10.357s 00:27:19.436 sys 0m10.078s 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:19.436 ************************************ 00:27:19.436 END TEST nvmf_digest 00:27:19.436 ************************************ 00:27:19.436 20:27:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.436 20:27:08 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:27:19.436 20:27:08 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:27:19.436 20:27:08 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:19.436 20:27:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:19.436 20:27:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:19.436 20:27:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.436 ************************************ 00:27:19.436 START TEST nvmf_mdns_discovery 00:27:19.436 ************************************ 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:19.436 * Looking for test storage... 
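For reference, the pass/fail gate of the digest-error run above reduces to a single iostat query over the bperf RPC socket. A minimal sketch of that check, assuming the bperf app were still serving RPCs on /var/tmp/bperf.sock:

    # Sketch: reproduce the get_transient_errcount step from the digest-error test above.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test passes when at least one transient transport error was counted (430 in this run).
    (( errcount > 0 )) && echo "saw ${errcount} transient transport errors on nvme0n1"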
00:27:19.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:27:19.436 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:27:19.437 
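The defaults defined above (discovery port 8009, subsystem NQN prefix nqn.2016-06.io.spdk:cnode, host NQN nqn.2021-12.io.spdk:test, host RPC socket /tmp/host.sock) are consumed later in this log when the host side starts mDNS discovery. A rough sketch of that call, assuming the host app is already listening on /tmp/host.sock:

    # Sketch: how the defaults above feed the discovery start call that appears further down.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test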
20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:19.437 Cannot find device "nvmf_tgt_br" 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:27:19.437 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:19.696 Cannot find device "nvmf_tgt_br2" 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:19.696 Cannot find device "nvmf_tgt_br" 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:19.696 Cannot find device "nvmf_tgt_br2" 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:19.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:19.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:19.696 20:27:08 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:19.696 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:19.954 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:19.954 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:19.954 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:19.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:27:19.954 00:27:19.954 --- 10.0.0.2 ping statistics --- 00:27:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.955 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:19.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:19.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:27:19.955 00:27:19.955 --- 10.0.0.3 ping statistics --- 00:27:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.955 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:19.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:27:19.955 00:27:19.955 --- 10.0.0.1 ping statistics --- 00:27:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.955 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=112728 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 112728 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 112728 ']' 
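The network bring-up above amounts to three veth pairs bridged together, with the target-side ends moved into a private namespace. A condensed sketch of the layout that the ping checks verify:

    # Topology (as established above): each *_br veth peer is enslaved to bridge nvmf_br.
    #   nvmf_init_if  10.0.0.1/24  host side
    #   nvmf_tgt_if   10.0.0.2/24  inside netns nvmf_tgt_ns_spdk
    #   nvmf_tgt_if2  10.0.0.3/24  inside netns nvmf_tgt_ns_spdk
    ping -c 1 10.0.0.2                                   # host -> first target address
    ping -c 1 10.0.0.3                                   # host -> second target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host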
00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:19.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:19.955 20:27:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.955 [2024-07-14 20:27:08.871701] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:19.955 [2024-07-14 20:27:08.871767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.955 [2024-07-14 20:27:09.008488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.213 [2024-07-14 20:27:09.115342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.213 [2024-07-14 20:27:09.115733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.213 [2024-07-14 20:27:09.115839] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.213 [2024-07-14 20:27:09.115970] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.213 [2024-07-14 20:27:09.116058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
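Because the target is launched with --wait-for-rpc, its discovery filter has to be configured before the framework is released. The RPC order used in the lines that follow, in sketch form (default /var/tmp/spdk.sock target socket assumed):

    # Sketch of the target-side bring-up order mirrored by the rpc_cmd calls below.
    scripts/rpc.py nvmf_set_config --discovery-filter=address    # startup-time RPC, must precede init
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009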
00:27:20.213 [2024-07-14 20:27:09.116181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 [2024-07-14 20:27:10.078459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 [2024-07-14 20:27:10.086655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 null0 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:27:21.149 null1 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 null2 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 null3 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.149 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=112775 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 112775 /tmp/host.sock 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:21.149 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 112775 ']' 00:27:21.150 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:27:21.150 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:21.150 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:21.150 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:21.150 20:27:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.150 [2024-07-14 20:27:10.197926] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:27:21.150 [2024-07-14 20:27:10.198266] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112775 ] 00:27:21.408 [2024-07-14 20:27:10.342372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.408 [2024-07-14 20:27:10.442491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=112808 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:27:22.346 20:27:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:27:22.346 Process 983 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:27:22.346 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:27:22.346 Successfully dropped root privileges. 00:27:22.346 avahi-daemon 0.8 starting up. 00:27:22.346 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:27:23.281 Successfully called chroot(). 00:27:23.281 Successfully dropped remaining capabilities. 00:27:23.281 No service file found in /etc/avahi/services. 00:27:23.281 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:23.281 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:27:23.281 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:23.281 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:27:23.281 Network interface enumeration completed. 00:27:23.281 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:27:23.281 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:27:23.281 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:27:23.281 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:27:23.281 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2555478827. 
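With avahi reporting server startup complete on both target interfaces, the _nvme-disc._tcp services registered further down (via nvmf_publish_mdns_prr) could in principle be inspected from the same namespace. A hypothetical spot-check using the standard avahi-browse tool, not part of the captured run:

    # Hypothetical verification (not executed by this test): resolve advertised
    # _nvme-disc._tcp services as seen inside the target namespace.
    ip netns exec nvmf_tgt_ns_spdk avahi-browse --resolve --terminate _nvme-disc._tcp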
00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:23.281 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:23.282 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.282 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.282 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:23.282 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.541 [2024-07-14 20:27:12.567313] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.541 20:27:12 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:23.541 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.799 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:27:23.799 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:23.799 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.799 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.799 [2024-07-14 20:27:12.663182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.799 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 [2024-07-14 20:27:12.703014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 [2024-07-14 20:27:12.710997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.800 20:27:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:27:24.734 [2024-07-14 20:27:13.467331] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:24.992 [2024-07-14 20:27:14.067316] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:24.992 [2024-07-14 20:27:14.067365] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:24.992 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:24.992 cookie is 0 00:27:24.992 is_local: 1 00:27:24.992 our_own: 0 00:27:24.992 wide_area: 0 00:27:24.992 multicast: 1 00:27:24.992 cached: 1 00:27:25.250 [2024-07-14 20:27:14.167282] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:25.250 [2024-07-14 20:27:14.167308] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:25.250 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:25.250 cookie is 0 00:27:25.250 is_local: 1 00:27:25.250 our_own: 0 00:27:25.250 wide_area: 0 00:27:25.250 multicast: 1 00:27:25.250 cached: 1 00:27:25.250 [2024-07-14 20:27:14.167328] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:25.250 [2024-07-14 20:27:14.267290] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:25.250 [2024-07-14 20:27:14.267317] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:25.250 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:25.250 cookie is 0 00:27:25.250 is_local: 1 00:27:25.250 our_own: 0 00:27:25.250 wide_area: 0 00:27:25.250 multicast: 1 00:27:25.250 cached: 1 00:27:25.508 [2024-07-14 20:27:14.367295] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:25.508 [2024-07-14 20:27:14.367315] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:25.508 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:25.508 cookie is 0 00:27:25.508 is_local: 1 00:27:25.508 our_own: 0 00:27:25.508 wide_area: 0 00:27:25.508 multicast: 1 00:27:25.508 cached: 1 00:27:25.508 [2024-07-14 20:27:14.367325] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:26.072 [2024-07-14 20:27:15.075069] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:26.072 [2024-07-14 20:27:15.075092] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:26.072 [2024-07-14 20:27:15.075109] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:26.330 [2024-07-14 20:27:15.161175] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:27:26.330 [2024-07-14 20:27:15.217611] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:26.330 [2024-07-14 20:27:15.217638] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:26.330 [2024-07-14 20:27:15.274600] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:26.330 [2024-07-14 20:27:15.274621] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:26.330 [2024-07-14 20:27:15.274635] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.330 [2024-07-14 20:27:15.360701] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:27:26.588 [2024-07-14 20:27:15.415357] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:26.588 [2024-07-14 20:27:15.415383] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.117 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:29.118 20:27:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.118 20:27:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.509 [2024-07-14 20:27:19.273570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:30.509 [2024-07-14 20:27:19.274089] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:30.509 [2024-07-14 20:27:19.274121] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:30.509 [2024-07-14 20:27:19.274156] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:30.509 [2024-07-14 20:27:19.274170] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.509 [2024-07-14 20:27:19.281448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:30.509 [2024-07-14 20:27:19.282065] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:30.509 [2024-07-14 20:27:19.282111] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.509 20:27:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:30.509 [2024-07-14 20:27:19.413159] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:30.509 [2024-07-14 20:27:19.413361] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:30.509 [2024-07-14 20:27:19.478465] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:30.509 [2024-07-14 20:27:19.478489] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:30.509 [2024-07-14 20:27:19.478495] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:30.509 [2024-07-14 20:27:19.478510] 
bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:30.509 [2024-07-14 20:27:19.478549] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:30.509 [2024-07-14 20:27:19.478559] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:30.509 [2024-07-14 20:27:19.478563] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:30.509 [2024-07-14 20:27:19.478575] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.509 [2024-07-14 20:27:19.524267] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:30.509 [2024-07-14 20:27:19.524286] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:30.509 [2024-07-14 20:27:19.524324] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:30.509 [2024-07-14 20:27:19.524332] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 
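The bdev-list comparison that just completed (host/mdns_discovery.sh@152 via the @65 helper) is the pattern this test leans on throughout: dump the host app's RPC output as JSON, flatten the names with jq, and string-compare one sorted line. A minimal sketch of that helper, assuming rpc_cmd wraps SPDK's scripts/rpc.py as in autotest_common.sh and the host app's RPC socket is /tmp/host.sock:

    get_bdev_list() {
        # List every bdev the host app currently sees, as one sorted line of names.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # The assertion then reduces to a plain string compare, e.g. once both extra
    # namespaces (null1/null3) have been added to the two subsystems:
    [[ $(get_bdev_list) == "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2" ]]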
00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.445 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.706 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.706 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:31.706 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:31.706 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:31.706 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.706 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.706 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.706 [2024-07-14 20:27:20.583061] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:31.707 [2024-07-14 20:27:20.583099] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:31.707 [2024-07-14 20:27:20.583132] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:31.707 [2024-07-14 20:27:20.583146] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:31.707 [2024-07-14 20:27:20.584645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.584688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.584716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.584725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.584734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.584743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.584753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.584761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.584769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.707 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.707 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:31.707 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.707 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:27:31.707 [2024-07-14 20:27:20.591081] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:31.707 [2024-07-14 20:27:20.591134] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:31.707 [2024-07-14 20:27:20.594608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.707 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.707 20:27:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:31.707 [2024-07-14 20:27:20.597023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.597053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.597065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.597074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.597085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.597093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.597103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.707 [2024-07-14 20:27:20.597111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.707 [2024-07-14 20:27:20.597119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.604628] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.707 [2024-07-14 20:27:20.604766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.604797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.707 [2024-07-14 20:27:20.604807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.604823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.604838] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.707 [2024-07-14 20:27:20.604847] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.707 [2024-07-14 20:27:20.604858] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.707 [2024-07-14 20:27:20.604901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
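The repeated connect() failures above, errno 111 (ECONNREFUSED), are the expected fallout of the listener removal issued a moment earlier: the host still holds a path to 10.0.0.2:4420 and 10.0.0.3:4420, so its reset logic keeps dialing ports nobody listens on until the next discovery log page prunes the stale paths. The triggering RPCs, as issued in the trace at host/mdns_discovery.sh@160 and @161:

    # Drop the original 4420 listeners on the target side; the 4421 listeners
    # added just before remain, so each subsystem keeps exactly one live path.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420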
00:27:31.707 [2024-07-14 20:27:20.606991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.614697] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.707 [2024-07-14 20:27:20.614787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.614806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.707 [2024-07-14 20:27:20.614816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.614830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.614842] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.707 [2024-07-14 20:27:20.614850] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.707 [2024-07-14 20:27:20.614859] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.707 [2024-07-14 20:27:20.614883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.707 [2024-07-14 20:27:20.617000] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.707 [2024-07-14 20:27:20.617091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.617109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.707 [2024-07-14 20:27:20.617119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.617133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.617145] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.707 [2024-07-14 20:27:20.617153] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.707 [2024-07-14 20:27:20.617161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.707 [2024-07-14 20:27:20.617174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.707 [2024-07-14 20:27:20.624759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.707 [2024-07-14 20:27:20.624846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.624864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.707 [2024-07-14 20:27:20.624889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.624906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.624919] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.707 [2024-07-14 20:27:20.624927] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.707 [2024-07-14 20:27:20.624935] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.707 [2024-07-14 20:27:20.624947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.707 [2024-07-14 20:27:20.627064] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.707 [2024-07-14 20:27:20.627139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.627158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.707 [2024-07-14 20:27:20.627168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.627182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.627207] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.707 [2024-07-14 20:27:20.627216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.707 [2024-07-14 20:27:20.627224] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.707 [2024-07-14 20:27:20.627268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.707 [2024-07-14 20:27:20.634821] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.707 [2024-07-14 20:27:20.634925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.634960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.707 [2024-07-14 20:27:20.634970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.634985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.634998] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.707 [2024-07-14 20:27:20.635006] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.707 [2024-07-14 20:27:20.635014] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.707 [2024-07-14 20:27:20.635027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.707 [2024-07-14 20:27:20.637112] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.707 [2024-07-14 20:27:20.637189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.637208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.707 [2024-07-14 20:27:20.637217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.707 [2024-07-14 20:27:20.637231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.707 [2024-07-14 20:27:20.637283] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.707 [2024-07-14 20:27:20.637296] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.707 [2024-07-14 20:27:20.637304] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.707 [2024-07-14 20:27:20.637317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.707 [2024-07-14 20:27:20.644872] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.707 [2024-07-14 20:27:20.644967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.707 [2024-07-14 20:27:20.644986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.707 [2024-07-14 20:27:20.644995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.645011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.645024] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.645032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.645040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.708 [2024-07-14 20:27:20.645052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.708 [2024-07-14 20:27:20.647162] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.708 [2024-07-14 20:27:20.647250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.647269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.708 [2024-07-14 20:27:20.647278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.647292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.647319] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.647330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.647338] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.708 [2024-07-14 20:27:20.647365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.708 [2024-07-14 20:27:20.654954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.708 [2024-07-14 20:27:20.655042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.655060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.708 [2024-07-14 20:27:20.655070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.655084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.655097] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.655105] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.655113] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.708 [2024-07-14 20:27:20.655126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.708 [2024-07-14 20:27:20.657207] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.708 [2024-07-14 20:27:20.657277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.657295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.708 [2024-07-14 20:27:20.657304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.657317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.657343] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.657353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.657361] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.708 [2024-07-14 20:27:20.657373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.708 [2024-07-14 20:27:20.664998] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.708 [2024-07-14 20:27:20.665085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.665102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.708 [2024-07-14 20:27:20.665111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.665125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.665137] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.665144] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.665152] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.708 [2024-07-14 20:27:20.665165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.708 [2024-07-14 20:27:20.667268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.708 [2024-07-14 20:27:20.667357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.667390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.708 [2024-07-14 20:27:20.667399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.667413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.667442] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.667452] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.667462] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.708 [2024-07-14 20:27:20.667475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.708 [2024-07-14 20:27:20.675043] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.708 [2024-07-14 20:27:20.675117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.675135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.708 [2024-07-14 20:27:20.675145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.675159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.675171] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.675179] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.675188] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.708 [2024-07-14 20:27:20.675200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.708 [2024-07-14 20:27:20.677314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.708 [2024-07-14 20:27:20.677398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.677415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.708 [2024-07-14 20:27:20.677425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.677438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.677466] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.677476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.677484] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.708 [2024-07-14 20:27:20.677497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.708 [2024-07-14 20:27:20.685092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.708 [2024-07-14 20:27:20.685186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.685205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.708 [2024-07-14 20:27:20.685214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.685228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.685240] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.685248] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.685256] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.708 [2024-07-14 20:27:20.685269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.708 [2024-07-14 20:27:20.687374] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.708 [2024-07-14 20:27:20.687460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.687477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.708 [2024-07-14 20:27:20.687487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.687501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.687528] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.687538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.687547] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.708 [2024-07-14 20:27:20.687559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.708 [2024-07-14 20:27:20.695142] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.708 [2024-07-14 20:27:20.695229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.695249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.708 [2024-07-14 20:27:20.695258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.695272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.708 [2024-07-14 20:27:20.695285] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.708 [2024-07-14 20:27:20.695293] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.708 [2024-07-14 20:27:20.695301] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.708 [2024-07-14 20:27:20.695314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.708 [2024-07-14 20:27:20.697417] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.708 [2024-07-14 20:27:20.697487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.708 [2024-07-14 20:27:20.697505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.708 [2024-07-14 20:27:20.697514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.708 [2024-07-14 20:27:20.697527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.709 [2024-07-14 20:27:20.697552] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.709 [2024-07-14 20:27:20.697563] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.709 [2024-07-14 20:27:20.697571] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.709 [2024-07-14 20:27:20.697583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.709 [2024-07-14 20:27:20.705186] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.709 [2024-07-14 20:27:20.705272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.709 [2024-07-14 20:27:20.705290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.709 [2024-07-14 20:27:20.705299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.709 [2024-07-14 20:27:20.705312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.709 [2024-07-14 20:27:20.705325] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.709 [2024-07-14 20:27:20.705332] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.709 [2024-07-14 20:27:20.705340] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.709 [2024-07-14 20:27:20.705352] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.709 [2024-07-14 20:27:20.707464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.709 [2024-07-14 20:27:20.707549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.709 [2024-07-14 20:27:20.707567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.709 [2024-07-14 20:27:20.707578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.709 [2024-07-14 20:27:20.707591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.709 [2024-07-14 20:27:20.707619] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.709 [2024-07-14 20:27:20.707629] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.709 [2024-07-14 20:27:20.707638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.709 [2024-07-14 20:27:20.707660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:31.709 [2024-07-14 20:27:20.715233] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.709 [2024-07-14 20:27:20.715319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.709 [2024-07-14 20:27:20.715352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc66250 with addr=10.0.0.2, port=4420 00:27:31.709 [2024-07-14 20:27:20.715361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66250 is same with the state(5) to be set 00:27:31.709 [2024-07-14 20:27:20.715374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66250 (9): Bad file descriptor 00:27:31.709 [2024-07-14 20:27:20.715387] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:31.709 [2024-07-14 20:27:20.715395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:31.709 [2024-07-14 20:27:20.715402] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:31.709 [2024-07-14 20:27:20.715415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:31.709 [2024-07-14 20:27:20.717509] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:31.709 [2024-07-14 20:27:20.717596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.709 [2024-07-14 20:27:20.717613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42740 with addr=10.0.0.3, port=4420 00:27:31.709 [2024-07-14 20:27:20.717623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42740 is same with the state(5) to be set 00:27:31.709 [2024-07-14 20:27:20.717636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42740 (9): Bad file descriptor 00:27:31.709 [2024-07-14 20:27:20.717664] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:31.709 [2024-07-14 20:27:20.717674] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:31.709 [2024-07-14 20:27:20.717683] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:31.709 [2024-07-14 20:27:20.717696] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
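Once the discovery poller processes the updated log page, the stale 4420 paths are reported "not found" and dropped (the entries that follow), leaving only 4421 on each controller. The test then re-verifies this per controller with the @73 path helper, which reduces to roughly the following, again assuming rpc_cmd and the /tmp/host.sock socket from the trace:

    get_subsystem_paths() {
        # Print the trsvcid of every active path for one host-side controller,
        # numerically sorted on a single line (e.g. "4420 4421" or just "4421").
        local name=$1    # mdns0_nvme0 or mdns1_nvme0 in this run
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # Expected after the prune: only the 4421 path survives on each controller.
    [[ $(get_subsystem_paths mdns0_nvme0) == "4421" ]]
    [[ $(get_subsystem_paths mdns1_nvme0) == "4421" ]]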
00:27:31.709 [2024-07-14 20:27:20.722666] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:31.709 [2024-07-14 20:27:20.722692] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:31.709 [2024-07-14 20:27:20.722710] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:31.709 [2024-07-14 20:27:20.722741] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:31.709 [2024-07-14 20:27:20.722755] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:31.709 [2024-07-14 20:27:20.722766] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:31.968 [2024-07-14 20:27:20.808774] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:31.968 [2024-07-14 20:27:20.808822] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:32.535 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
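The reconnect failures and the discovery "not found"/"found again" messages above show the mDNS poller reconciling paths after the subsystem listeners moved off port 4420: connects to 4420 are refused (errno 111) and only the 4421 paths survive. A condensed sketch of how that state is then verified over the host RPC socket, using only commands that appear in the trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py); the listener move itself happened earlier in the run and is shown here only as an assumed step:
  # assumed earlier step (not part of this excerpt): move each subsystem listener 4420 -> 4421, e.g.
  #   rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  #   rpc_cmd nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  # host-side verification, as traced above:
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs                                    # -> mdns0_nvme0 mdns1_nvme0
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs                                               # -> mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs  # -> 4421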
00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.794 [2024-07-14 20:27:21.867950] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.794 20:27:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # sort 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:34.170 20:27:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.170 [2024-07-14 20:27:23.104197] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:34.170 2024/07/14 20:27:23 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:34.170 request: 00:27:34.170 { 00:27:34.170 "method": "bdev_nvme_start_mdns_discovery", 00:27:34.170 "params": { 00:27:34.170 "name": "mdns", 00:27:34.170 "svcname": "_nvme-disc._http", 00:27:34.170 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:34.170 } 00:27:34.170 } 00:27:34.170 Got JSON-RPC error response 00:27:34.170 GoRPCClient: error on JSON-RPC call 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.170 20:27:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:34.735 [2024-07-14 20:27:23.692703] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:34.735 [2024-07-14 20:27:23.792700] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:34.992 [2024-07-14 20:27:23.892706] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:34.992 [2024-07-14 20:27:23.892726] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:34.992 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:34.992 cookie is 0 00:27:34.992 is_local: 1 00:27:34.992 our_own: 0 00:27:34.992 wide_area: 0 00:27:34.992 multicast: 1 00:27:34.992 cached: 1 00:27:34.992 [2024-07-14 20:27:23.992708] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:34.992 [2024-07-14 20:27:23.992727] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:34.992 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:34.992 cookie is 0 00:27:34.992 is_local: 1 00:27:34.992 our_own: 0 00:27:34.992 wide_area: 0 00:27:34.992 multicast: 1 00:27:34.992 cached: 1 00:27:34.992 [2024-07-14 20:27:23.992738] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:35.250 [2024-07-14 20:27:24.092706] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:35.250 [2024-07-14 20:27:24.092724] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:35.250 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:35.250 cookie is 0 00:27:35.250 is_local: 1 00:27:35.250 our_own: 0 00:27:35.250 wide_area: 0 00:27:35.250 multicast: 1 00:27:35.250 cached: 1 00:27:35.250 [2024-07-14 20:27:24.192709] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:35.250 [2024-07-14 20:27:24.192727] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:35.250 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:35.250 cookie is 0 00:27:35.250 is_local: 1 00:27:35.250 our_own: 0 00:27:35.250 wide_area: 0 00:27:35.250 multicast: 1 00:27:35.250 cached: 1 00:27:35.250 [2024-07-14 20:27:24.192736] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:35.816 [2024-07-14 20:27:24.896583] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:35.816 [2024-07-14 20:27:24.896608] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:35.816 [2024-07-14 20:27:24.896626] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:36.074 [2024-07-14 20:27:24.982689] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:36.074 [2024-07-14 20:27:25.041992] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:36.074 [2024-07-14 20:27:25.042016] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:36.074 [2024-07-14 20:27:25.096384] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:36.074 [2024-07-14 20:27:25.096404] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:36.074 [2024-07-14 20:27:25.096420] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:36.333 [2024-07-14 20:27:25.182496] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:36.333 [2024-07-14 20:27:25.241153] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:36.333 [2024-07-14 20:27:25.241193] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@81 -- # sort 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:39.614 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.615 [2024-07-14 20:27:28.300462] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:39.615 2024/07/14 20:27:28 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:39.615 request: 00:27:39.615 { 00:27:39.615 "method": "bdev_nvme_start_mdns_discovery", 00:27:39.615 "params": { 00:27:39.615 "name": "cdc", 00:27:39.615 "svcname": "_nvme-disc._tcp", 00:27:39.615 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:39.615 } 00:27:39.615 } 00:27:39.615 Got JSON-RPC error response 00:27:39.615 GoRPCClient: error on JSON-RPC call 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:39.615 20:27:28 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 112775 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 112775 00:27:39.615 [2024-07-14 20:27:28.588602] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:39.615 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 112808 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:27:39.875 Got SIGTERM, quitting. 00:27:39.875 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:39.875 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:39.875 avahi-daemon 0.8 exiting. 
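The two rejected calls above exercise the duplicate-registration checks in bdev_nvme_start_mdns_discovery: reusing the discovery name mdns with a different service type, and registering a second discovery (cdc) for the already-tracked service _nvme-disc._tcp, both fail with JSON-RPC Code=-17 (File exists). A condensed sketch of that sequence, with every flag taken from the trace above:
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test    # initial registration succeeds
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test   # rejected: discovery named mdns already running (Code=-17)
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test     # rejected: _nvme-disc._tcp already tracked (Code=-17)
  rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns                                                    # stops the avahi poller for _nvme-disc._tcp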
00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.875 rmmod nvme_tcp 00:27:39.875 rmmod nvme_fabrics 00:27:39.875 rmmod nvme_keyring 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 112728 ']' 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 112728 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 112728 ']' 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 112728 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112728 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:39.875 killing process with pid 112728 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112728' 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 112728 00:27:39.875 20:27:28 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 112728 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:40.134 00:27:40.134 real 0m20.795s 00:27:40.134 user 0m40.547s 00:27:40.134 sys 0m2.118s 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:40.134 ************************************ 00:27:40.134 END TEST nvmf_mdns_discovery 00:27:40.134 ************************************ 00:27:40.134 20:27:29 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:40.134 20:27:29 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 
00:27:40.134 20:27:29 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:40.134 20:27:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:40.134 20:27:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:40.134 20:27:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:40.394 ************************************ 00:27:40.394 START TEST nvmf_host_multipath 00:27:40.394 ************************************ 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:40.394 * Looking for test storage... 00:27:40.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:40.394 Cannot 
find device "nvmf_tgt_br" 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:40.394 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:40.395 Cannot find device "nvmf_tgt_br2" 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:40.395 Cannot find device "nvmf_tgt_br" 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:40.395 Cannot find device "nvmf_tgt_br2" 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:40.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:40.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:40.395 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:40.654 20:27:29 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:40.654 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:40.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:27:40.654 00:27:40.654 --- 10.0.0.2 ping statistics --- 00:27:40.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.654 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:40.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:40.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:27:40.655 00:27:40.655 --- 10.0.0.3 ping statistics --- 00:27:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.655 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:40.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:27:40.655 00:27:40.655 --- 10.0.0.1 ping statistics --- 00:27:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.655 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=113358 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 113358 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 113358 ']' 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.655 20:27:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:40.914 [2024-07-14 20:27:29.778342] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:40.914 [2024-07-14 20:27:29.778459] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.914 [2024-07-14 20:27:29.920001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:41.172 [2024-07-14 20:27:30.039517] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.173 [2024-07-14 20:27:30.039597] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:41.173 [2024-07-14 20:27:30.039608] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.173 [2024-07-14 20:27:30.039616] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.173 [2024-07-14 20:27:30.039623] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.173 [2024-07-14 20:27:30.040098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.173 [2024-07-14 20:27:30.040105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.740 20:27:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:41.740 20:27:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:41.740 20:27:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:41.740 20:27:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.740 20:27:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:41.998 20:27:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.998 20:27:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=113358 00:27:41.998 20:27:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:42.257 [2024-07-14 20:27:31.118131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.257 20:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:42.515 Malloc0 00:27:42.515 20:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:42.774 20:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:43.033 20:27:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.290 [2024-07-14 20:27:32.174385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.290 20:27:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:43.548 [2024-07-14 20:27:32.402467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=113457 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 113457 /var/tmp/bdevperf.sock 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 113457 ']' 
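By this point the multipath target is fully configured: a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 listening on both 10.0.0.2:4420 and 10.0.0.2:4421 inside the nvmf_tgt_ns_spdk namespace, with bdevperf started against /var/tmp/bdevperf.sock (the host-side attach of both paths follows below). A condensed sketch of the target-side RPC sequence, using only the calls traced above (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421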
00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:43.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:43.548 20:27:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:44.483 20:27:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:44.483 20:27:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:44.483 20:27:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:44.740 20:27:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:44.998 Nvme0n1 00:27:44.998 20:27:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:45.563 Nvme0n1 00:27:45.563 20:27:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:45.563 20:27:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:46.499 20:27:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:46.499 20:27:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:46.758 20:27:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:47.016 20:27:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:47.016 20:27:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113358 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:47.016 20:27:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113549 00:27:47.016 20:27:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:53.580 20:27:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:53.580 20:27:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # 
cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:53.580 Attaching 4 probes... 00:27:53.580 @path[10.0.0.2, 4421]: 17843 00:27:53.580 @path[10.0.0.2, 4421]: 18307 00:27:53.580 @path[10.0.0.2, 4421]: 18582 00:27:53.580 @path[10.0.0.2, 4421]: 18388 00:27:53.580 @path[10.0.0.2, 4421]: 18339 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113549 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113358 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113675 00:27:53.580 20:27:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:00.217 Attaching 4 probes... 
00:28:00.217 @path[10.0.0.2, 4420]: 18966 00:28:00.217 @path[10.0.0.2, 4420]: 19478 00:28:00.217 @path[10.0.0.2, 4420]: 19152 00:28:00.217 @path[10.0.0.2, 4420]: 18961 00:28:00.217 @path[10.0.0.2, 4420]: 19090 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113675 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:28:00.217 20:27:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:00.217 20:27:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:00.476 20:27:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:28:00.476 20:27:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113358 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:00.476 20:27:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113806 00:28:00.476 20:27:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.038 Attaching 4 probes... 
00:28:07.038 @path[10.0.0.2, 4421]: 14578 00:28:07.038 @path[10.0.0.2, 4421]: 18546 00:28:07.038 @path[10.0.0.2, 4421]: 17995 00:28:07.038 @path[10.0.0.2, 4421]: 17996 00:28:07.038 @path[10.0.0.2, 4421]: 17741 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113806 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:07.038 20:27:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:07.297 20:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:07.297 20:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113932 00:28:07.297 20:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113358 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:07.297 20:27:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.858 Attaching 4 probes... 
00:28:13.858 00:28:13.858 00:28:13.858 00:28:13.858 00:28:13.858 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113932 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:13.858 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:14.116 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:14.116 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114062 00:28:14.116 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113358 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:14.116 20:28:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:20.674 20:28:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:20.674 20:28:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:20.674 Attaching 4 probes... 
00:28:20.674 @path[10.0.0.2, 4421]: 17773 00:28:20.674 @path[10.0.0.2, 4421]: 18415 00:28:20.674 @path[10.0.0.2, 4421]: 18131 00:28:20.674 @path[10.0.0.2, 4421]: 18249 00:28:20.674 @path[10.0.0.2, 4421]: 17587 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:20.674 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114062 00:28:20.675 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:20.675 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:20.675 [2024-07-14 20:28:09.462474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 00:28:20.675 [2024-07-14 20:28:09.462657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244e70 is same with the state(5) to be set 
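The block above is the failover step: after confirm_io_on_port optimized 4421 passes, the test removes the 4421 listener, and the nvmf_tcp_qpair_set_recv_state errors that follow are apparently the target tearing that connection down before I/O falls back to 4420. Every ANA transition exercised in this log reduces to the same two rpc.py calls plus a listener query; below is a minimal bash sketch of that pattern, reconstructed only from the commands visible in the trace (the real helpers live in the multipath.sh script referenced by the host/multipath.sh@NN markers, and active_port_for is a hypothetical wrapper name, not from the script):

  #!/usr/bin/env bash
  # Sketch only -- not the shipped multipath.sh. Paths and NQN are copied from the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Flip the ANA state of both listeners, e.g. set_ANA_state non_optimized optimized
  set_ANA_state() {
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Hypothetical helper: print the port that currently advertises the given ANA state
  # (the same nvmf_subsystem_get_listeners | jq query confirm_io_on_port runs above).
  active_port_for() {
      "$rpc" nvmf_subsystem_get_listeners "$nqn" \
          | jq -r ".[] | select (.ana_states[0].ana_state==\"$1\") | .address.trsvcid"
  }

With listeners on 4420 and 4421, set_ANA_state non_optimized optimized followed by active_port_for optimized should print 4421, which is what the @path[10.0.0.2, 4421] counters above confirm.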
00:28:20.675 20:28:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:21.611 20:28:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:21.611 20:28:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114198 00:28:21.611 20:28:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:21.611 20:28:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113358 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:28.170 Attaching 4 probes... 
00:28:28.170 @path[10.0.0.2, 4420]: 18528 00:28:28.170 @path[10.0.0.2, 4420]: 19128 00:28:28.170 @path[10.0.0.2, 4420]: 18499 00:28:28.170 @path[10.0.0.2, 4420]: 18922 00:28:28.170 @path[10.0.0.2, 4420]: 19400 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114198 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:28.170 20:28:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:28.170 [2024-07-14 20:28:17.041718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:28.170 20:28:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:28.428 20:28:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:34.990 20:28:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:34.990 20:28:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114385 00:28:34.990 20:28:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:34.990 20:28:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113358 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:40.259 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:40.259 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:40.518 Attaching 4 probes... 
00:28:40.518 @path[10.0.0.2, 4421]: 18153 00:28:40.518 @path[10.0.0.2, 4421]: 18171 00:28:40.518 @path[10.0.0.2, 4421]: 18758 00:28:40.518 @path[10.0.0.2, 4421]: 18589 00:28:40.518 @path[10.0.0.2, 4421]: 18438 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114385 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 113457 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 113457 ']' 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 113457 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113457 00:28:40.518 killing process with pid 113457 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113457' 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 113457 00:28:40.518 20:28:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 113457 00:28:40.796 Connection closed with partial response: 00:28:40.796 00:28:40.796 00:28:40.796 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 113457 00:28:40.796 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:40.796 [2024-07-14 20:27:32.468564] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:40.796 [2024-07-14 20:27:32.468692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113457 ] 00:28:40.796 [2024-07-14 20:27:32.604651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.796 [2024-07-14 20:27:32.700051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.796 Running I/O for 90 seconds... 
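For reference, each confirm_io_on_port block above decides pass/fail by attaching scripts/bpf/nvmf_path.bt with bpftrace.sh to pid 113358, letting it count I/O per path into trace.txt for six seconds, and then parsing the first @path[10.0.0.2, <port>] line. A hedged sketch of that check follows, using only the awk/cut/sed pipeline and the active_port variable visible in the trace (expected_port is an illustrative name for the helper's argument):

  # Sketch of the check behind multipath.sh@67-@71, reconstructed from the trace, not copied from the script.
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

  # trace.txt lines look like "@path[10.0.0.2, 4421]: 17843":
  # field 2 is "4421]:", so strip at ']' and keep only the first sample.
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

  # Pass when the port that actually carried I/O matches both the expected port and the
  # port that nvmf_subsystem_get_listeners reports for the expected ANA state.
  [[ $port == "$expected_port" ]] && [[ $port == "$active_port" ]]

After the check, the tracing process is killed and trace.txt is removed, which matches the kill/rm -f pairs repeated throughout this section.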
00:28:40.796 [2024-07-14 20:27:42.590450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.796 [2024-07-14 20:27:42.590521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.590965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.590990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:40.796 [2024-07-14 20:27:42.591925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.591978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.591999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.592012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.592033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.592048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:40.796 [2024-07-14 20:27:42.592069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.796 [2024-07-14 20:27:42.592083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.592972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.592997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:28:40.797 [2024-07-14 20:27:42.593033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.593964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.593985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.797 [2024-07-14 20:27:42.594000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:40.797 [2024-07-14 20:27:42.594038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.594971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.594994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 
20:27:42.595381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.798 [2024-07-14 20:27:42.595462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.798 [2024-07-14 20:27:42.595480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.595966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.595987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.596001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.596019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.596033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:42.596053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:42.596068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.135630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.799 [2024-07-14 20:27:49.135691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.135748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.135768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.135788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.135801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.135820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.135833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.135852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.135914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.135962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.135978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.135999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:40.799 [2024-07-14 20:27:49.136128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:40.799 [2024-07-14 20:27:49.136589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.799 [2024-07-14 20:27:49.136602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.136621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.136634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.136653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.136667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.136686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.136700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.136845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.136886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.136914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.136946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.136970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.136985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137007] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137429] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 
m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.800 [2024-07-14 20:27:49.137853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.800 [2024-07-14 20:27:49.137884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.137921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.137948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.137973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.137988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.138018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.138033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.138056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.138070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.138092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.138106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.138129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.138143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.138165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.138179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.138201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.138215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.138238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.138251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.140588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.140616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:49.140649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:49.140665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.113597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.801 [2024-07-14 20:27:56.113663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.115962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.115981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.801 [2024-07-14 20:27:56.116385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:40.801 [2024-07-14 20:27:56.116404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116692] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.116973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.116986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.117004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.117017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 
p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.117035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.117048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.117067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.117080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.117099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.117112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.117929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.117954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.117978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.117992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.802 [2024-07-14 20:27:56.118474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.802 [2024-07-14 20:27:56.118492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118579] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.118974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.118988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:92 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.803 [2024-07-14 20:27:56.119713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.803 [2024-07-14 20:27:56.119913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:40.803 [2024-07-14 20:27:56.119935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.119948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 
sqhd:000d p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.120973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.120991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 
20:27:56.121271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100968 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:40.804 [2024-07-14 20:27:56.121926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.804 [2024-07-14 20:27:56.121940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.121959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.121976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.121995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:28:40.805 [2024-07-14 20:27:56.122304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.122917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.122982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.123005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.123020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.123046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.123060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.123081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.123096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.123125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.123141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.123991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.124020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.124056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.124071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.124091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.124104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.124122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.124136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.805 [2024-07-14 20:27:56.124154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.805 [2024-07-14 20:27:56.124167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:40.806 [2024-07-14 20:27:56.124259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 
nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.124973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.124991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:28:40.806 [2024-07-14 20:27:56.125337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.125574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.125587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.132921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.132957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:40.806 [2024-07-14 20:27:56.132982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.806 [2024-07-14 20:27:56.133000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.807 [2024-07-14 20:27:56.133123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.133331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.133345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.134984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.134999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.807 [2024-07-14 20:27:56.135445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.807 [2024-07-14 20:27:56.135459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 
dnr:0 00:28:40.808 [2024-07-14 20:27:56.135718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.135972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.135986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101280 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.136843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.136858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.137775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.137800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.137825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.137840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.137860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.137891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:40.808 [2024-07-14 20:27:56.137912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.808 [2024-07-14 20:27:56.137927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.137965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.137980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:65 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f 
p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.138977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.138992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.809 [2024-07-14 20:27:56.139335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:40.809 [2024-07-14 20:27:56.139364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 
20:27:56.139663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.810 [2024-07-14 20:27:56.139762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.139965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.139988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.140701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101744 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.140743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.140792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.140827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.140862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.140927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.140967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.140988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 
20:27:56.141502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.810 [2024-07-14 20:27:56.141581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.810 [2024-07-14 20:27:56.141613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.141984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.141998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:40.811 [2024-07-14 20:27:56.142620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.142984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.142999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.143019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.143033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.143054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 
nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.143075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.143097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.143112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.143133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.143147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.143168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.143181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:40.811 [2024-07-14 20:27:56.143202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.811 [2024-07-14 20:27:56.143217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.143255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.143276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.143296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.143309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.143329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.143343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.143364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.143379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.143415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.143428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.143448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.143462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:28:40.812 [2024-07-14 20:27:56.144793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.144951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.144986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.812 [2024-07-14 20:27:56.145795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:40.812 [2024-07-14 20:27:56.145816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.145830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.145851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.145865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.145896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.145914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.145935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:40.813 [2024-07-14 20:27:56.145956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.145992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.813 [2024-07-14 20:27:56.146452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.146572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.146586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.813 [2024-07-14 20:27:56.147752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:40.813 [2024-07-14 20:27:56.147771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.147784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:28:40.814 [2024-07-14 20:27:56.147802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.147816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.147835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.147865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.147902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.147916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.147937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.147952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.147987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.148957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.148981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:124 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:40.814 [2024-07-14 20:27:56.149320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.814 [2024-07-14 20:27:56.149332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.815 [2024-07-14 20:27:56.149595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.815 [2024-07-14 20:27:56.149613] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.815 [2024-07-14 20:27:56.149626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:40.815 [2024-07-14 20:27:56.149644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.815 [2024-07-14 20:27:56.149656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
[... the remainder of this burst repeats the same pattern from 2024-07-14 20:27:56.149 through 20:27:56.160: nvme_io_qpair_print_command notices for WRITE commands (sqid:1, cid 0-126, lba 100936-101944, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and occasional READ commands (lba 100928, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by a spdk_nvme_print_completion notice reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
20:27:56.160277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101816 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160949] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.160984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.160998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.161019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.161033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.161054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.161068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.161089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.161104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.161124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.161144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.161165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.161179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.161200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.820 [2024-07-14 20:27:56.161214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:40.820 [2024-07-14 20:27:56.161279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.161291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.161309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.161322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 
20:27:56.166898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.166977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.167971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.167989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:6 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.821 [2024-07-14 20:27:56.168668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:40.821 [2024-07-14 20:27:56.168692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.168729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.168765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.168802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.168838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.168909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.168946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.168984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.168997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:28:40.822 [2024-07-14 20:27:56.169454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.169963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.169987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.170000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.170033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.170047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.170070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.822 [2024-07-14 20:27:56.170084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:40.822 [2024-07-14 20:27:56.170107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:27:56.170729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:27:56.170918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:27:56.170989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.462975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.463381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.463415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.463450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.463501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.463536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.463570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.463604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.463624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.463638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.464087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.464136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.464165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.464194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.464253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.464312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.823 [2024-07-14 20:28:09.464339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.823 [2024-07-14 20:28:09.464353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.823 [2024-07-14 20:28:09.464366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.464979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.464998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 
20:28:09.465640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.824 [2024-07-14 20:28:09.465681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.824 [2024-07-14 20:28:09.465693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.465978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.465992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 
[2024-07-14 20:28:09.466882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.466982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.466997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.467011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.467027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.467041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.825 [2024-07-14 20:28:09.467057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.825 [2024-07-14 20:28:09.467070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 20:28:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:40.826 [2024-07-14 20:28:09.467350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467544] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.467637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.467650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.468304] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14ab660 was disconnected and freed. reset controller. 00:28:40.826 [2024-07-14 20:28:09.468433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.826 [2024-07-14 20:28:09.468468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.468485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.826 [2024-07-14 20:28:09.468498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.468511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.826 [2024-07-14 20:28:09.468523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.468536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.826 [2024-07-14 20:28:09.468549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.468562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.826 [2024-07-14 20:28:09.468575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.826 [2024-07-14 20:28:09.468593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x148dde0 is same with the state(5) to be set 00:28:40.826 [2024-07-14 20:28:09.469680] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.826 [2024-07-14 20:28:09.469715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148dde0 (9): Bad file descriptor 00:28:40.826 [2024-07-14 20:28:09.469812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.826 [2024-07-14 20:28:09.469838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148dde0 with addr=10.0.0.2, port=4421 00:28:40.826 [2024-07-14 20:28:09.469853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148dde0 is same with the state(5) to be set 00:28:40.826 [2024-07-14 20:28:09.469908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148dde0 (9): Bad file descriptor 00:28:40.826 [2024-07-14 20:28:09.469930] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.826 [2024-07-14 20:28:09.469948] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.826 [2024-07-14 20:28:09.469976] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.826 [2024-07-14 20:28:09.470001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.826 [2024-07-14 20:28:09.470016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.826 [2024-07-14 20:28:19.578881] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:40.826 Received shutdown signal, test time was about 55.154651 seconds 00:28:40.826 00:28:40.826 Latency(us) 00:28:40.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.826 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:40.826 Verification LBA range: start 0x0 length 0x4000 00:28:40.826 Nvme0n1 : 55.15 7959.31 31.09 0.00 0.00 16054.63 673.98 7076934.75 00:28:40.826 =================================================================================================================== 00:28:40.826 Total : 7959.31 31.09 0.00 0.00 16054.63 673.98 7076934.75 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.086 rmmod nvme_tcp 00:28:41.086 rmmod nvme_fabrics 00:28:41.086 rmmod nvme_keyring 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 113358 ']' 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 113358 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 113358 ']' 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 113358 00:28:41.086 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:28:41.345 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:41.345 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113358 00:28:41.345 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:41.345 killing process with pid 113358 00:28:41.345 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:41.345 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113358' 00:28:41.345 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 113358 00:28:41.345 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 113358 00:28:41.604 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:41.604 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:41.604 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:41.604 20:28:30 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.604 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:41.605 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.605 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.605 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.605 20:28:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:41.605 00:28:41.605 real 1m1.326s 00:28:41.605 user 2m51.698s 00:28:41.605 sys 0m14.826s 00:28:41.605 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:41.605 ************************************ 00:28:41.605 20:28:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:41.605 END TEST nvmf_host_multipath 00:28:41.605 ************************************ 00:28:41.605 20:28:30 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:41.605 20:28:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:41.605 20:28:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:41.605 20:28:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:41.605 ************************************ 00:28:41.605 START TEST nvmf_timeout 00:28:41.605 ************************************ 00:28:41.605 20:28:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:41.864 * Looking for test storage... 
00:28:41.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.864 
20:28:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:41.864 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.865 20:28:30 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:41.865 Cannot find device "nvmf_tgt_br" 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:41.865 Cannot find device "nvmf_tgt_br2" 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:41.865 Cannot find device "nvmf_tgt_br" 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:41.865 Cannot find device "nvmf_tgt_br2" 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:41.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:41.865 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:41.865 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:42.124 20:28:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:42.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:28:42.124 00:28:42.124 --- 10.0.0.2 ping statistics --- 00:28:42.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.124 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:42.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:42.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:28:42.124 00:28:42.124 --- 10.0.0.3 ping statistics --- 00:28:42.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.124 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:42.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:28:42.124 00:28:42.124 --- 10.0.0.1 ping statistics --- 00:28:42.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.124 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:42.124 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=114702 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 114702 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114702 ']' 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:42.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:42.125 20:28:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:42.125 [2024-07-14 20:28:31.139514] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:42.125 [2024-07-14 20:28:31.139608] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.384 [2024-07-14 20:28:31.275946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:42.384 [2024-07-14 20:28:31.403718] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.384 [2024-07-14 20:28:31.403798] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.384 [2024-07-14 20:28:31.403808] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.384 [2024-07-14 20:28:31.403816] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.384 [2024-07-14 20:28:31.403823] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.384 [2024-07-14 20:28:31.404279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.384 [2024-07-14 20:28:31.404409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:43.321 20:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:43.321 [2024-07-14 20:28:32.392187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.580 20:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:43.839 Malloc0 00:28:43.839 20:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:44.098 20:28:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:44.359 20:28:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.619 [2024-07-14 20:28:33.472093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=114789 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # 
waitforlisten 114789 /var/tmp/bdevperf.sock 00:28:44.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114789 ']' 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:44.619 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:44.619 [2024-07-14 20:28:33.536050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:44.619 [2024-07-14 20:28:33.536154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114789 ] 00:28:44.619 [2024-07-14 20:28:33.672280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.877 [2024-07-14 20:28:33.755250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.878 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:44.878 20:28:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:44.878 20:28:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:45.136 20:28:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:45.395 NVMe0n1 00:28:45.395 20:28:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=114823 00:28:45.395 20:28:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:45.395 20:28:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:45.654 Running I/O for 10 seconds... 
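At this point the whole data path has been assembled over JSON-RPC and bdevperf is driving a 10-second verify workload (-q 128 -o 4096 -w verify -t 10) against it. Condensed for readability, the sequence the test just ran is shown below; the commands and arguments are exactly the ones in the log, only the $rpc shorthand is introduced here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (nvmf_tgt inside nvmf_tgt_ns_spdk, default RPC socket /var/tmp/spdk.sock):
  $rpc nvmf_create_transport -t tcp -o -u 8192                                  # TCP transport for the target
  $rpc bdev_malloc_create 64 512 -b Malloc0                                     # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                 # expose Malloc0 as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side (bdevperf, RPC socket /var/tmp/bdevperf.sock):
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The 5-second controller-loss timeout and 2-second reconnect delay on the attach are what host/timeout.sh is about to exercise: the very next RPC removes the 10.0.0.2:4420 listener while I/O is still in flight.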
00:28:46.593 20:28:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.593 [2024-07-14 20:28:35.615942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.593 [2024-07-14 20:28:35.616200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xc35f40 is same with the state(5) to be set [the identical tcp.c:1598:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats for every event timestamped 20:28:35.616208 through 20:28:35.616792] 00:28:46.594 [2024-07-14 20:28:35.616800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is
same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.616933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35f40 is same with the state(5) to be set 00:28:46.594 [2024-07-14 20:28:35.618363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 
[2024-07-14 20:28:35.618704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.594 [2024-07-14 20:28:35.618731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.594 [2024-07-14 20:28:35.618739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.618984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.618994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.595 [2024-07-14 20:28:35.619400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.595 [2024-07-14 20:28:35.619418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.595 [2024-07-14 20:28:35.619440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.595 [2024-07-14 20:28:35.619459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.595 [2024-07-14 20:28:35.619476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.595 [2024-07-14 20:28:35.619493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.595 [2024-07-14 20:28:35.619511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 
[2024-07-14 20:28:35.619588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.595 [2024-07-14 20:28:35.619657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.595 [2024-07-14 20:28:35.619665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.596 [2024-07-14 20:28:35.619691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.596 [2024-07-14 20:28:35.619709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.596 [2024-07-14 20:28:35.619731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.596 [2024-07-14 20:28:35.619749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.619986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:46 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.619994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88088 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 
20:28:35.620402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.596 [2024-07-14 20:28:35.620531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.596 [2024-07-14 20:28:35.620538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.597 [2024-07-14 20:28:35.620713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.620753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88304 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.620762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.620780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.620787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88312 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.620798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.620813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 
20:28:35.620820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88320 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.620828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.620843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.620850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88328 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.620858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.620872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.620879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88336 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.620896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.620912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.620919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88344 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.620942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.620961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.620968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88352 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.620976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.620984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.620995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.621002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88360 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.621009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.621017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.621024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.621030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88368 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.621038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.621047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.621053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.621060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88376 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.621067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.621075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.621081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.621088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88384 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.621096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.621103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.621109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.621116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88392 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.621123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.621131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.621137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.621144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88400 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.621151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.621159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.621165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.621171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88408 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.621179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.597 [2024-07-14 20:28:35.641628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.597 [2024-07-14 20:28:35.641666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.597 [2024-07-14 20:28:35.641681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:88416 len:8 PRP1 0x0 PRP2 0x0 00:28:46.597 [2024-07-14 20:28:35.641696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.598 [2024-07-14 20:28:35.641710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.598 [2024-07-14 20:28:35.641722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.598 [2024-07-14 20:28:35.641733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88424 len:8 PRP1 0x0 PRP2 0x0 00:28:46.598 [2024-07-14 20:28:35.641745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.598 [2024-07-14 20:28:35.641828] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1840460 was disconnected and freed. reset controller. 00:28:46.598 [2024-07-14 20:28:35.641995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.598 [2024-07-14 20:28:35.642019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.598 [2024-07-14 20:28:35.642044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.598 [2024-07-14 20:28:35.642057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.598 [2024-07-14 20:28:35.642070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.598 [2024-07-14 20:28:35.642083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.598 [2024-07-14 20:28:35.642096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.598 [2024-07-14 20:28:35.642109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.598 [2024-07-14 20:28:35.642121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822620 is same with the state(5) to be set 00:28:46.598 [2024-07-14 20:28:35.642463] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.598 [2024-07-14 20:28:35.642523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1822620 (9): Bad file descriptor 00:28:46.598 [2024-07-14 20:28:35.642681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.598 [2024-07-14 20:28:35.642709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1822620 with addr=10.0.0.2, port=4420 00:28:46.598 [2024-07-14 20:28:35.642724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822620 is same with the state(5) to be set 00:28:46.598 [2024-07-14 20:28:35.642748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1822620 (9): Bad file descriptor 00:28:46.598 [2024-07-14 20:28:35.642769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.598 [2024-07-14 20:28:35.642782] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.598 [2024-07-14 20:28:35.642796] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.598 [2024-07-14 20:28:35.642823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.598 [2024-07-14 20:28:35.642844] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.598 20:28:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:49.134 [2024-07-14 20:28:37.643113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.134 [2024-07-14 20:28:37.643221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1822620 with addr=10.0.0.2, port=4420 00:28:49.134 [2024-07-14 20:28:37.643240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822620 is same with the state(5) to be set 00:28:49.134 [2024-07-14 20:28:37.643281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1822620 (9): Bad file descriptor 00:28:49.134 [2024-07-14 20:28:37.643309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.134 [2024-07-14 20:28:37.643328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.134 [2024-07-14 20:28:37.643341] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.134 [2024-07-14 20:28:37.643383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.134 [2024-07-14 20:28:37.643396] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.134 20:28:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:28:49.134 20:28:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:49.134 20:28:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:28:49.134 20:28:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:28:49.134 20:28:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:28:49.134 20:28:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:28:49.134 20:28:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:28:49.134 20:28:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:28:49.134 20:28:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:28:51.037 [2024-07-14 20:28:39.643558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.037 [2024-07-14 20:28:39.643661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1822620 with addr=10.0.0.2, port=4420
00:28:51.037 [2024-07-14 20:28:39.643678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822620 is same with the state(5) to be set
00:28:51.037 [2024-07-14 20:28:39.643706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1822620 (9): Bad file descriptor
00:28:51.037 [2024-07-14 20:28:39.643725] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:51.037 [2024-07-14 20:28:39.643735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:51.037 [2024-07-14 20:28:39.643746] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:51.037 [2024-07-14 20:28:39.643774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:51.037 [2024-07-14 20:28:39.643786] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.938 [2024-07-14 20:28:41.643820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.938 [2024-07-14 20:28:41.643937] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:52.938 [2024-07-14 20:28:41.643965] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:52.938 [2024-07-14 20:28:41.643977] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:28:52.938 [2024-07-14 20:28:41.644009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:53.871
00:28:53.871 Latency(us)
00:28:53.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.871 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:53.871 Verification LBA range: start 0x0 length 0x4000
00:28:53.871 NVMe0n1 : 8.15 1340.19 5.24 15.70 0.00 94361.73 2085.24 7046430.72
00:28:53.871 ===================================================================================================================
00:28:53.871 Total : 1340.19 5.24 15.70 0.00 94361.73 2085.24 7046430.72
00:28:53.871 0
00:28:54.435 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:28:54.435 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:54.435 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:28:54.435 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:28:54.435 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:28:54.435 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:28:54.435 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 114823
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 114789
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114789 ']'
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114789
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114789
00:28:54.692 killing process with pid 114789
Received shutdown signal, test time was about 9.279115 seconds
00:28:54.692
00:28:54.692 Latency(us)
00:28:54.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:54.692 ===================================================================================================================
00:28:54.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114789'
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114789
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114789
00:28:54.692 20:28:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:55.207 [2024-07-14 20:28:44.161226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:55.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=114975
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 114975 /var/tmp/bdevperf.sock
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114975 ']'
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable
00:28:55.207 20:28:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:55.463 [2024-07-14 20:28:44.226681] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:28:55.463 [2024-07-14 20:28:44.226772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114975 ]
00:28:55.464 [2024-07-14 20:28:44.359708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.464 [2024-07-14 20:28:44.443584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:56.394 20:28:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:28:56.394 20:28:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0
00:28:56.394 20:28:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:28:56.394 20:28:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:28:56.652 NVMe0n1
00:28:56.653 20:28:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=115018
00:28:56.653 20:28:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:56.653 20:28:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:28:56.911 Running I/O for 10 seconds...
00:28:57.849 20:28:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.849 [2024-07-14 20:28:46.920539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920677] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.920753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf830 is same with the state(5) to be set 00:28:57.849 [2024-07-14 20:28:46.921822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.921917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.921942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.921968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.921980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.921989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.849 [2024-07-14 20:28:46.922413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.849 [2024-07-14 20:28:46.922423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:57.850 [2024-07-14 20:28:46.922571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.850 [2024-07-14 20:28:46.922579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922780] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.922975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.922988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.850 [2024-07-14 20:28:46.923312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.850 [2024-07-14 20:28:46.923325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 
[2024-07-14 20:28:46.923443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.923986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.923995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.851 [2024-07-14 20:28:46.924148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.851 [2024-07-14 20:28:46.924156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 
20:28:46.924224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.852 [2024-07-14 20:28:46.924333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87784 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87792 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87800 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 
[2024-07-14 20:28:46.924469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87808 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87816 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87824 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87832 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87840 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87848 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:57.852 [2024-07-14 20:28:46.924682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:57.852 [2024-07-14 20:28:46.924689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87856 len:8 PRP1 0x0 PRP2 0x0 00:28:57.852 [2024-07-14 20:28:46.924697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924764] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2047340 was disconnected and freed. reset controller. 00:28:57.852 [2024-07-14 20:28:46.924851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:57.852 [2024-07-14 20:28:46.924867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:57.852 [2024-07-14 20:28:46.924877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.110 [2024-07-14 20:28:46.937232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.110 [2024-07-14 20:28:46.937294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.110 [2024-07-14 20:28:46.937309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.110 [2024-07-14 20:28:46.937323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.110 [2024-07-14 20:28:46.937335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.110 [2024-07-14 20:28:46.937348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2029620 is same with the state(5) to be set 00:28:58.110 [2024-07-14 20:28:46.937668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.110 [2024-07-14 20:28:46.937709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:28:58.110 [2024-07-14 20:28:46.937883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.110 [2024-07-14 20:28:46.937912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2029620 with addr=10.0.0.2, port=4420 00:28:58.110 [2024-07-14 20:28:46.937936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2029620 is same with the state(5) to be set 00:28:58.110 [2024-07-14 20:28:46.937960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:28:58.110 [2024-07-14 20:28:46.937983] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.110 [2024-07-14 20:28:46.937996] 
nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.110 [2024-07-14 20:28:46.938010] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.110 [2024-07-14 20:28:46.938036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.110 [2024-07-14 20:28:46.938052] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.110 20:28:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:59.045 [2024-07-14 20:28:47.938207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.045 [2024-07-14 20:28:47.938316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2029620 with addr=10.0.0.2, port=4420 00:28:59.045 [2024-07-14 20:28:47.938331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2029620 is same with the state(5) to be set 00:28:59.045 [2024-07-14 20:28:47.938358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:28:59.045 [2024-07-14 20:28:47.938376] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.045 [2024-07-14 20:28:47.938387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.045 [2024-07-14 20:28:47.938397] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.045 [2024-07-14 20:28:47.938426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.045 [2024-07-14 20:28:47.938438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.045 20:28:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.303 [2024-07-14 20:28:48.202826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.303 20:28:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 115018 00:29:00.238 [2024-07-14 20:28:48.958627] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:06.792 00:29:06.792 Latency(us) 00:29:06.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.792 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:06.792 Verification LBA range: start 0x0 length 0x4000 00:29:06.792 NVMe0n1 : 10.01 7196.31 28.11 0.00 0.00 17761.98 1802.24 3035150.89 00:29:06.792 =================================================================================================================== 00:29:06.792 Total : 7196.31 28.11 0.00 0.00 17761.98 1802.24 3035150.89 00:29:06.792 0 00:29:06.792 20:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=115135 00:29:06.792 20:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:06.792 20:28:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:29:07.050 Running I/O for 10 seconds... 
00:29:07.985 20:28:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.247 [2024-07-14 20:28:57.080882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.080984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.080996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36ac0 is same with the state(5) to be set 00:29:08.247 [2024-07-14 20:28:57.081566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 
[2024-07-14 20:28:57.081682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.247 [2024-07-14 20:28:57.081772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.247 [2024-07-14 20:28:57.081975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.247 [2024-07-14 20:28:57.081986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.081995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:17 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95928 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 
[2024-07-14 20:28:57.082520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.248 [2024-07-14 20:28:57.082581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.248 [2024-07-14 20:28:57.082843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.248 [2024-07-14 20:28:57.082863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.082875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.082885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.082895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.082905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.082916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.082925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.082945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.082960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.082971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.082981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.082992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 
20:28:57.083382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.249 [2024-07-14 20:28:57.083634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.249 [2024-07-14 20:28:57.083654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.249 [2024-07-14 20:28:57.083674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.249 [2024-07-14 20:28:57.083693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.249 [2024-07-14 20:28:57.083713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.249 [2024-07-14 20:28:57.083742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.249 [2024-07-14 20:28:57.083762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.249 [2024-07-14 20:28:57.083773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.249 [2024-07-14 20:28:57.083782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:116 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.083982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.083991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.084010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.084030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96816 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.250 [2024-07-14 20:28:57.084050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.250 [2024-07-14 20:28:57.084069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.250 [2024-07-14 20:28:57.084098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.250 [2024-07-14 20:28:57.084118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.250 [2024-07-14 20:28:57.084139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.250 [2024-07-14 20:28:57.084160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.250 [2024-07-14 20:28:57.084179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.084199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.084219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.084238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 
[2024-07-14 20:28:57.084258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.084277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.250 [2024-07-14 20:28:57.084298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:08.250 [2024-07-14 20:28:57.084343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:08.250 [2024-07-14 20:28:57.084351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96232 len:8 PRP1 0x0 PRP2 0x0 00:29:08.250 [2024-07-14 20:28:57.084361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.250 [2024-07-14 20:28:57.084416] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2049770 was disconnected and freed. reset controller. 00:29:08.250 [2024-07-14 20:28:57.084673] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.250 [2024-07-14 20:28:57.084758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:29:08.250 [2024-07-14 20:28:57.084903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.250 [2024-07-14 20:28:57.084926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2029620 with addr=10.0.0.2, port=4420 00:29:08.250 [2024-07-14 20:28:57.084937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2029620 is same with the state(5) to be set 00:29:08.250 [2024-07-14 20:28:57.084956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:29:08.250 [2024-07-14 20:28:57.084985] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.250 [2024-07-14 20:28:57.084995] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.250 [2024-07-14 20:28:57.085005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.250 [2024-07-14 20:28:57.085026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
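The reset failure above and the retries that follow are what the timeout test is deliberately provoking: the target's TCP listener on 10.0.0.2:4420 has been taken away, so every reconnect attempt from bdev_nvme ends in connect() errno 111 (connection refused) until host/timeout.sh puts the listener back with nvmf_subsystem_add_listener a little further down. A minimal sketch of that fault-injection pattern, assuming the SPDK repo root as working directory and the same subsystem and address as in the trace (the surrounding test harness is omitted):

# Drop the listener so in-flight I/O is aborted and reconnects get ECONNREFUSED (errno 111).
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Let bdev_nvme make a few reconnect attempts on its reconnect delay (the test sleeps ~3 s here).
sleep 3
# Restore the listener; the next reconnect/reset attempt should then succeed.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420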
00:29:08.250 [2024-07-14 20:28:57.085038] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.250 20:28:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:29:09.184 [2024-07-14 20:28:58.085178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.184 [2024-07-14 20:28:58.085259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2029620 with addr=10.0.0.2, port=4420 00:29:09.184 [2024-07-14 20:28:58.085292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2029620 is same with the state(5) to be set 00:29:09.184 [2024-07-14 20:28:58.085324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:29:09.184 [2024-07-14 20:28:58.085343] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.184 [2024-07-14 20:28:58.085352] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.184 [2024-07-14 20:28:58.085364] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.184 [2024-07-14 20:28:58.085393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.184 [2024-07-14 20:28:58.085404] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.120 [2024-07-14 20:28:59.085563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-07-14 20:28:59.085651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2029620 with addr=10.0.0.2, port=4420 00:29:10.120 [2024-07-14 20:28:59.085668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2029620 is same with the state(5) to be set 00:29:10.120 [2024-07-14 20:28:59.085697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:29:10.120 [2024-07-14 20:28:59.085714] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.120 [2024-07-14 20:28:59.085723] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.120 [2024-07-14 20:28:59.085733] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.120 [2024-07-14 20:28:59.085761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.120 [2024-07-14 20:28:59.085773] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.055 [2024-07-14 20:29:00.088903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.055 [2024-07-14 20:29:00.088994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2029620 with addr=10.0.0.2, port=4420 00:29:11.055 [2024-07-14 20:29:00.089010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2029620 is same with the state(5) to be set 00:29:11.055 [2024-07-14 20:29:00.089235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2029620 (9): Bad file descriptor 00:29:11.055 [2024-07-14 20:29:00.089451] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.055 [2024-07-14 20:29:00.089464] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.055 [2024-07-14 20:29:00.089474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.055 [2024-07-14 20:29:00.093026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.055 [2024-07-14 20:29:00.093056] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.055 20:29:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.314 [2024-07-14 20:29:00.338761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.314 20:29:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 115135 00:29:12.249 [2024-07-14 20:29:01.125865] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
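With the path restored, the first bdevperf run can finish and print the Latency(us) summary just below. The numbers hang together: 6244.65 read IOPS at the 4096-byte I/O size works out to about 24.39 MiB/s, matching the MiB/s column; the ~3.02 s maximum latency is consistent with the roughly three-second window during which the listener was gone; and the Fail/s column reflects the I/O that failed while the path was down. A one-line check of the throughput figure:

# 6244.65 IOPS * 4096 bytes per I/O, converted to MiB/s -> prints 24.39
awk 'BEGIN { printf("%.2f\n", 6244.65 * 4096 / (1024 * 1024)) }'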
00:29:17.515
00:29:17.515 Latency(us)
00:29:17.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.515 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:17.515 Verification LBA range: start 0x0 length 0x4000
00:29:17.515 NVMe0n1 : 10.01 6244.65 24.39 4061.18 0.00 12396.21 562.27 3019898.88
00:29:17.515 ===================================================================================================================
00:29:17.515 Total : 6244.65 24.39 4061.18 0.00 12396.21 0.00 3019898.88
00:29:17.515 0
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 114975
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114975 ']'
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114975
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114975
00:29:17.515 killing process with pid 114975
Received shutdown signal, test time was about 10.000000 seconds
00:29:17.515
00:29:17.515 Latency(us)
00:29:17.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.515 ===================================================================================================================
00:29:17.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114975'
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114975
00:29:17.515 20:29:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114975
00:29:17.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=115256
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 115256 /var/tmp/bdevperf.sock
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 115256 ']'
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable
00:29:17.516 20:29:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:29:17.516 [2024-07-14 20:29:06.245078] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:29:17.516 [2024-07-14 20:29:06.245471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115256 ] 00:29:17.516 [2024-07-14 20:29:06.381881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.516 [2024-07-14 20:29:06.468645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.111 20:29:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:18.111 20:29:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:29:18.111 20:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=115283 00:29:18.111 20:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 115256 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:18.111 20:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:18.676 20:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:18.934 NVMe0n1 00:29:18.934 20:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=115338 00:29:18.934 20:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:18.934 20:29:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:29:18.934 Running I/O for 10 seconds... 
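The second half of the test stands up a fresh bdevperf instance the same way the trace above shows. Condensed into a runnable sketch, with paths relative to the SPDK repo root and flags copied verbatim from the trace; the real harness uses waitforlisten rather than a fixed sleep, and the bpftrace attach is optional:

# Start bdevperf idle (-z) on its own RPC socket and remember its pid.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!
# Attach the nvmf timeout probes to that pid, as the test does.
scripts/bpftrace.sh "$bdevperf_pid" scripts/bpf/nvmf_timeout.bt &
sleep 1   # stand-in for waitforlisten: give the RPC socket time to appear
# Apply the same bdev_nvme options as the trace (-r -1 -e 9) and attach the
# controller with the timeout/reconnect knobs under test.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the queued randread workload defined by the bdevperf flags above.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests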
00:29:19.867 20:29:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.127 [2024-07-14 20:29:09.085479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.127 [2024-07-14 20:29:09.085906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085922] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.085998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 
00:29:20.128 [2024-07-14 20:29:09.086099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is 
same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a1f0 is same with the state(5) to be set 00:29:20.128 [2024-07-14 20:29:09.086990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 
[2024-07-14 20:29:09.087071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087290] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.128 [2024-07-14 20:29:09.087775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.128 [2024-07-14 20:29:09.087786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.087987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.087998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:20.129 [2024-07-14 20:29:09.088115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 
20:29:09.088312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.088984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.088994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.129 [2024-07-14 20:29:09.089282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.129 [2024-07-14 20:29:09.089292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 
20:29:09.089326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.130 [2024-07-14 20:29:09.089604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:20.130 [2024-07-14 20:29:09.089643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:20.130 [2024-07-14 20:29:09.089651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0 00:29:20.130 [2024-07-14 20:29:09.089660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.130 [2024-07-14 20:29:09.089714] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d43460 was disconnected and freed. reset controller. 
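The wall of *NOTICE* lines above is the host-side NVMe driver draining qpair 1: every READ still queued is completed with ABORTED - SQ DELETION (sct 00 / sc 08) once the submission queue goes away, and bdev_nvme then reports the disconnected qpair as freed and schedules a controller reset. A small, hypothetical helper script (not part of the test suite) can summarize such a capture, assuming the bdevperf console output above was saved to a file:

  #!/usr/bin/env bash
  # Summarize the qpair teardown from a saved bdevperf log.
  # The log path is an assumption for illustration; pass the real capture as $1.
  log=${1:-bdevperf.log}

  # One completion per queued READ is printed with "ABORTED - SQ DELETION" (00/08).
  aborted=$(grep -c 'ABORTED - SQ DELETION' "$log")

  # bdev_nvme prints one line per I/O qpair it disconnects and frees before resetting.
  freed=$(grep -c 'was disconnected and freed' "$log")

  echo "aborted completions: $aborted"
  echo "qpairs freed:        $freed"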
00:29:20.130 [2024-07-14 20:29:09.090053] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.130 [2024-07-14 20:29:09.090172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25620 (9): Bad file descriptor 00:29:20.130 [2024-07-14 20:29:09.090312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.130 [2024-07-14 20:29:09.090339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d25620 with addr=10.0.0.2, port=4420 00:29:20.130 [2024-07-14 20:29:09.090351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25620 is same with the state(5) to be set 00:29:20.130 [2024-07-14 20:29:09.090369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25620 (9): Bad file descriptor 00:29:20.130 [2024-07-14 20:29:09.090385] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.130 [2024-07-14 20:29:09.090395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.130 [2024-07-14 20:29:09.090405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.130 [2024-07-14 20:29:09.090425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.130 [2024-07-14 20:29:09.090436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.130 20:29:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 115338 00:29:22.031 [2024-07-14 20:29:11.090622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.031 [2024-07-14 20:29:11.090726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d25620 with addr=10.0.0.2, port=4420 00:29:22.031 [2024-07-14 20:29:11.090744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25620 is same with the state(5) to be set 00:29:22.031 [2024-07-14 20:29:11.090794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25620 (9): Bad file descriptor 00:29:22.031 [2024-07-14 20:29:11.090815] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:22.031 [2024-07-14 20:29:11.090826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:22.031 [2024-07-14 20:29:11.090837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:22.031 [2024-07-14 20:29:11.090866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
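The reconnect attempt above fails with connect() errno 111 (connection refused) because the subsystem on the target side is going away; bdev_nvme keeps retrying on a roughly two-second cadence (20:29:09, 20:29:11, then 20:29:13 below) until it gives up and leaves the controller in a failed state. As a hedged sketch of how a controller with that kind of bounded reconnect behaviour can be attached through rpc.py: the short options are standard, but the long reconnect/loss-timeout option names are recalled from the bdev_nvme documentation and should be checked against rpc.py bdev_nvme_attach_controller --help for the SPDK release in use, and the RPC socket path is only an example.

  # Sketch only: attach NVMe0 over TCP with a 2 s reconnect delay and an 8 s
  # controller-loss timeout (values chosen to mirror the cadence seen in this log,
  # not read from the test script itself).
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 8
  # The first -s selects the bdevperf RPC socket (example path); the second -s is
  # the NVMe/TCP service port of the listener at 10.0.0.2.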
00:29:22.031 [2024-07-14 20:29:11.090890] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:24.561 [2024-07-14 20:29:13.091097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.561 [2024-07-14 20:29:13.091181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d25620 with addr=10.0.0.2, port=4420
00:29:24.561 [2024-07-14 20:29:13.091198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25620 is same with the state(5) to be set
00:29:24.561 [2024-07-14 20:29:13.091224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d25620 (9): Bad file descriptor
00:29:24.561 [2024-07-14 20:29:13.091256] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:24.561 [2024-07-14 20:29:13.091268] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:24.561 [2024-07-14 20:29:13.091279] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:24.561 [2024-07-14 20:29:13.091307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:24.561 [2024-07-14 20:29:13.091318] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.459 [2024-07-14 20:29:15.091404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.459 [2024-07-14 20:29:15.091475] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.459 [2024-07-14 20:29:15.091503] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.459 [2024-07-14 20:29:15.091513] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:29:26.459 [2024-07-14 20:29:15.091541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.026
00:29:27.026 Latency(us)
00:29:27.026 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min         max
00:29:27.026 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:29:27.026 NVMe0n1            :       8.18    3006.36      11.74      15.64      0.00    42319.66    2978.91  7015926.69
00:29:27.026 ===================================================================================================================
00:29:27.026 Total              :               3006.36      11.74      15.64      0.00    42319.66    2978.91  7015926.69
00:29:27.026 0
00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:27.285 Attaching 5 probes...
00:29:27.285 1386.209577: reset bdev controller NVMe0 00:29:27.285 1386.400804: reconnect bdev controller NVMe0 00:29:27.285 3386.627661: reconnect delay bdev controller NVMe0 00:29:27.285 3386.666969: reconnect bdev controller NVMe0 00:29:27.285 5387.087114: reconnect delay bdev controller NVMe0 00:29:27.285 5387.132360: reconnect bdev controller NVMe0 00:29:27.285 7387.514465: reconnect delay bdev controller NVMe0 00:29:27.285 7387.566751: reconnect bdev controller NVMe0 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 115283 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 115256 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 115256 ']' 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 115256 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115256 00:29:27.285 killing process with pid 115256 00:29:27.285 Received shutdown signal, test time was about 8.244719 seconds 00:29:27.285 00:29:27.285 Latency(us) 00:29:27.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.285 =================================================================================================================== 00:29:27.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115256' 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 115256 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 115256 00:29:27.285 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.543 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:27.543 20:29:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:27.543 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.543 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.801 rmmod nvme_tcp 00:29:27.801 rmmod nvme_fabrics 00:29:27.801 rmmod nvme_keyring 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- 
nvmf/common.sh@124 -- # set -e 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 114702 ']' 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 114702 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114702 ']' 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114702 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114702 00:29:27.801 killing process with pid 114702 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:27.801 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114702' 00:29:27.802 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114702 00:29:27.802 20:29:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114702 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:28.060 00:29:28.060 real 0m46.471s 00:29:28.060 user 2m15.866s 00:29:28.060 sys 0m5.333s 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:28.060 20:29:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:28.060 ************************************ 00:29:28.060 END TEST nvmf_timeout 00:29:28.060 ************************************ 00:29:28.060 20:29:17 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:29:28.060 20:29:17 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:28.060 20:29:17 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.060 20:29:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 20:29:17 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:28.318 00:29:28.318 real 21m48.723s 00:29:28.318 user 65m32.542s 00:29:28.318 sys 4m32.232s 00:29:28.318 20:29:17 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:28.318 ************************************ 00:29:28.318 20:29:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 END TEST nvmf_tcp 00:29:28.318 ************************************ 00:29:28.318 20:29:17 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:28.318 20:29:17 -- 
spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:28.318 20:29:17 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:28.318 20:29:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:28.318 20:29:17 -- common/autotest_common.sh@10 -- # set +x 00:29:28.318 ************************************ 00:29:28.318 START TEST spdkcli_nvmf_tcp 00:29:28.318 ************************************ 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:28.318 * Looking for test storage... 00:29:28.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.318 20:29:17 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=115552 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 115552 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 115552 ']' 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:28.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:28.319 20:29:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.577 [2024-07-14 20:29:17.410887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:28.577 [2024-07-14 20:29:17.411025] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115552 ] 00:29:28.577 [2024-07-14 20:29:17.549339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:28.577 [2024-07-14 20:29:17.641041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.577 [2024-07-14 20:29:17.641049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:29.511 20:29:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.512 20:29:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:29.512 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:29.512 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:29.512 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:29.512 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:29.512 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:29.512 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:29.512 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:29:29.512 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:29.512 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:29.512 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:29.512 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:29.512 ' 00:29:32.794 [2024-07-14 20:29:21.135893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.360 [2024-07-14 20:29:22.408825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:35.885 [2024-07-14 20:29:24.762168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:37.784 [2024-07-14 20:29:26.787204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:39.684 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:39.685 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:39.685 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:39.685 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:39.685 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:39.685 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:39.685 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:39.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:39.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:39.685 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:39.685 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:39.685 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:39.685 20:29:28 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:39.943 20:29:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:39.943 20:29:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:39.943 20:29:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:39.943 20:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.943 20:29:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.943 20:29:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:39.943 20:29:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:39.943 20:29:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.201 20:29:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:40.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:40.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:40.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:40.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:40.201 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:40.201 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:40.201 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:40.201 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:40.201 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:40.201 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:40.201 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:40.201 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:40.201 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:40.201 ' 00:29:45.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:45.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:45.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:45.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:45.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:45.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:45.463 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:45.463 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:45.463 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:45.463 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:45.463 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:45.463 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:45.463 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:45.464 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 115552 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 115552 ']' 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 115552 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115552 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115552' 00:29:45.464 killing process with pid 115552 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 115552 00:29:45.464 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 115552 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 115552 ']' 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 115552 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 115552 ']' 00:29:45.723 Process with pid 115552 is not found 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 115552 00:29:45.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (115552) - No such process 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 115552 is not found' 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:45.723 ************************************ 00:29:45.723 END TEST spdkcli_nvmf_tcp 00:29:45.723 ************************************ 00:29:45.723 00:29:45.723 real 0m17.480s 00:29:45.723 user 0m37.625s 00:29:45.723 sys 0m0.977s 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:45.723 20:29:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.723 20:29:34 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh 
--transport=tcp 00:29:45.723 20:29:34 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:45.723 20:29:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:45.723 20:29:34 -- common/autotest_common.sh@10 -- # set +x 00:29:45.723 ************************************ 00:29:45.723 START TEST nvmf_identify_passthru 00:29:45.723 ************************************ 00:29:45.723 20:29:34 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:45.983 * Looking for test storage... 00:29:45.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:45.983 20:29:34 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:45.983 20:29:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.983 20:29:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.983 20:29:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:45.983 20:29:34 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:45.983 20:29:34 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.983 20:29:34 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.983 20:29:34 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:45.983 20:29:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.983 20:29:34 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:45.983 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.984 20:29:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:45.984 20:29:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:45.984 Cannot find device "nvmf_tgt_br" 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:45.984 Cannot find device "nvmf_tgt_br2" 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:45.984 Cannot find device "nvmf_tgt_br" 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:45.984 Cannot find device "nvmf_tgt_br2" 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:45.984 20:29:34 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:45.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:45.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:45.984 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:46.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:29:46.243 00:29:46.243 --- 10.0.0.2 ping statistics --- 00:29:46.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.243 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:46.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:46.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:29:46.243 00:29:46.243 --- 10.0.0.3 ping statistics --- 00:29:46.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.243 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:46.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:29:46.243 00:29:46.243 --- 10.0.0.1 ping statistics --- 00:29:46.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.243 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.243 20:29:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.243 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.243 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:46.243 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:29:46.243 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:46.243 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:46.243 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:46.243 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:46.243 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:46.502 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
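A minimal shell sketch of the identify step traced just above, assuming the SPDK repo layout used in this run; the binary paths and the PCI address 0000:00:10.0 are taken from the log and will differ on other systems, and `head -n1` stands in for the test's bash-array indexing of the bdf list.
    # Pick the first NVMe controller known to SPDK and read its serial number,
    # mirroring the gen_nvme.sh | jq and spdk_nvme_identify | grep | awk pipeline above.
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
                -r "trtype:PCIe traddr:${bdf}" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
    echo "${serial}"    # prints 12340 for the QEMU-emulated drive used in this run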
00:29:46.502 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:46.502 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:46.502 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:46.761 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:46.761 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.761 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.761 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=116044 00:29:46.761 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:46.761 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.761 20:29:35 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 116044 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 116044 ']' 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:46.761 20:29:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.761 [2024-07-14 20:29:35.758211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:46.761 [2024-07-14 20:29:35.758496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.020 [2024-07-14 20:29:35.898686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.020 [2024-07-14 20:29:35.995047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.020 [2024-07-14 20:29:35.995109] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.020 [2024-07-14 20:29:35.995121] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.020 [2024-07-14 20:29:35.995130] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:47.020 [2024-07-14 20:29:35.995136] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.020 [2024-07-14 20:29:35.995321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.020 [2024-07-14 20:29:35.995654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.020 [2024-07-14 20:29:35.996562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.020 [2024-07-14 20:29:35.996618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:29:47.955 20:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.955 20:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:47.955 [2024-07-14 20:29:36.888194] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.955 20:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:47.955 [2024-07-14 20:29:36.902536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.955 20:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.955 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:47.956 20:29:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:47.956 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.956 20:29:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:47.956 Nvme0n1 00:29:47.956 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.956 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:47.956 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.956 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.214 [2024-07-14 20:29:37.056590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.214 [ 00:29:48.214 { 00:29:48.214 "allow_any_host": true, 00:29:48.214 "hosts": [], 00:29:48.214 "listen_addresses": [], 00:29:48.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:48.214 "subtype": "Discovery" 00:29:48.214 }, 00:29:48.214 { 00:29:48.214 "allow_any_host": true, 00:29:48.214 "hosts": [], 00:29:48.214 "listen_addresses": [ 00:29:48.214 { 00:29:48.214 "adrfam": "IPv4", 00:29:48.214 "traddr": "10.0.0.2", 00:29:48.214 "trsvcid": "4420", 00:29:48.214 "trtype": "TCP" 00:29:48.214 } 00:29:48.214 ], 00:29:48.214 "max_cntlid": 65519, 00:29:48.214 "max_namespaces": 1, 00:29:48.214 "min_cntlid": 1, 00:29:48.214 "model_number": "SPDK bdev Controller", 00:29:48.214 "namespaces": [ 00:29:48.214 { 00:29:48.214 "bdev_name": "Nvme0n1", 00:29:48.214 "name": "Nvme0n1", 00:29:48.214 "nguid": "D526FBFC5F4041C98F9FA1034A243A77", 00:29:48.214 "nsid": 1, 00:29:48.214 "uuid": "d526fbfc-5f40-41c9-8f9f-a1034a243a77" 00:29:48.214 } 00:29:48.214 ], 00:29:48.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.214 "serial_number": "SPDK00000000000001", 00:29:48.214 "subtype": "NVMe" 00:29:48.214 } 00:29:48.214 ] 00:29:48.214 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:48.214 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:48.472 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:48.472 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:48.472 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:48.472 20:29:37 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:48.472 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:48.472 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.472 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.472 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.472 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.472 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:48.472 20:29:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:48.472 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:48.472 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:48.731 rmmod nvme_tcp 00:29:48.731 rmmod nvme_fabrics 00:29:48.731 rmmod nvme_keyring 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 116044 ']' 00:29:48.731 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 116044 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 116044 ']' 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 116044 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 116044 00:29:48.731 killing process with pid 116044 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 116044' 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 116044 00:29:48.731 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 116044 00:29:49.002 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.002 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.002 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.002 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.002 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.002 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.002 
20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:49.002 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.002 20:29:37 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:49.002 ************************************ 00:29:49.002 END TEST nvmf_identify_passthru 00:29:49.002 ************************************ 00:29:49.002 00:29:49.002 real 0m3.207s 00:29:49.002 user 0m8.000s 00:29:49.002 sys 0m0.835s 00:29:49.002 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:49.002 20:29:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.002 20:29:38 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:49.002 20:29:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:49.002 20:29:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:49.002 20:29:38 -- common/autotest_common.sh@10 -- # set +x 00:29:49.002 ************************************ 00:29:49.002 START TEST nvmf_dif 00:29:49.002 ************************************ 00:29:49.002 20:29:38 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:49.273 * Looking for test storage... 00:29:49.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:49.273 20:29:38 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.273 20:29:38 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:49.273 20:29:38 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.273 20:29:38 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.273 20:29:38 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.273 20:29:38 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.273 20:29:38 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.273 20:29:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.274 20:29:38 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:49.274 20:29:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.274 20:29:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:49.274 20:29:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:49.274 20:29:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:49.274 20:29:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:49.274 20:29:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.274 20:29:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:49.274 20:29:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:49.274 Cannot find device "nvmf_tgt_br" 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:49.274 Cannot find device "nvmf_tgt_br2" 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:49.274 Cannot find device "nvmf_tgt_br" 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:49.274 Cannot find device "nvmf_tgt_br2" 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:49.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:49.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:49.274 20:29:38 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:49.532 20:29:38 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:49.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:29:49.533 00:29:49.533 --- 10.0.0.2 ping statistics --- 00:29:49.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.533 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:49.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:49.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:29:49.533 00:29:49.533 --- 10.0.0.3 ping statistics --- 00:29:49.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.533 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:49.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:49.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:29:49.533 00:29:49.533 --- 10.0.0.1 ping statistics --- 00:29:49.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.533 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:49.533 20:29:38 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:49.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:49.791 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:49.791 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:50.050 20:29:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:50.050 20:29:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=116388 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:50.050 20:29:38 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 116388 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 116388 ']' 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:50.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:50.050 20:29:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:50.050 [2024-07-14 20:29:38.967026] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:50.050 [2024-07-14 20:29:38.967137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.050 [2024-07-14 20:29:39.106652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.309 [2024-07-14 20:29:39.221084] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
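Note: the nvmf_veth_init sequence traced above builds a small veth/bridge topology. The initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target gets nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, and the bridge nvmf_br joins the peer ends. A hand-run approximation of the same setup is sketched below; interface names and addresses are copied from the trace, cleanup and error handling are omitted, so treat it as illustrative rather than the exact helper.

#!/usr/bin/env bash
# Approximate re-run of the nvmf_veth_init commands traced above.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace; the peer ends stay outside.
ip link set nvmf_tgt_if netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the root-namespace ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP (port 4420) on the initiator interface and let the
# bridge forward between its own ports, as in the trace.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability checks corresponding to the pings logged above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

The two iptables rules and the closing pings correspond one-to-one with the commands and reachability checks in the trace; once the pings succeed, nvmf_tgt is started inside the namespace as shown in the following lines.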
00:29:50.309 [2024-07-14 20:29:39.221147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.309 [2024-07-14 20:29:39.221167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.309 [2024-07-14 20:29:39.221178] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.309 [2024-07-14 20:29:39.221188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.309 [2024-07-14 20:29:39.221220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.243 20:29:39 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:51.243 20:29:39 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:29:51.243 20:29:39 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:51.243 20:29:39 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.243 20:29:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:51.243 20:29:40 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.243 20:29:40 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:51.243 20:29:40 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:51.243 20:29:40 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.243 20:29:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:51.243 [2024-07-14 20:29:40.035859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.243 20:29:40 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.243 20:29:40 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:51.243 20:29:40 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:51.243 20:29:40 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:51.243 20:29:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:51.243 ************************************ 00:29:51.243 START TEST fio_dif_1_default 00:29:51.243 ************************************ 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:51.243 bdev_null0 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.243 20:29:40 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.243 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:51.243 [2024-07-14 20:29:40.088020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.244 { 00:29:51.244 "params": { 00:29:51.244 "name": "Nvme$subsystem", 00:29:51.244 "trtype": "$TEST_TRANSPORT", 00:29:51.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.244 "adrfam": "ipv4", 00:29:51.244 "trsvcid": "$NVMF_PORT", 00:29:51.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.244 "hdgst": ${hdgst:-false}, 00:29:51.244 "ddgst": ${ddgst:-false} 00:29:51.244 }, 00:29:51.244 "method": "bdev_nvme_attach_controller" 00:29:51.244 } 00:29:51.244 EOF 00:29:51.244 )") 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.244 20:29:40 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.244 "params": { 00:29:51.244 "name": "Nvme0", 00:29:51.244 "trtype": "tcp", 00:29:51.244 "traddr": "10.0.0.2", 00:29:51.244 "adrfam": "ipv4", 00:29:51.244 "trsvcid": "4420", 00:29:51.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:51.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:51.244 "hdgst": false, 00:29:51.244 "ddgst": false 00:29:51.244 }, 00:29:51.244 "method": "bdev_nvme_attach_controller" 00:29:51.244 }' 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:51.244 20:29:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:51.244 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:51.244 fio-3.35 00:29:51.244 Starting 1 thread 00:30:03.439 00:30:03.439 filename0: (groupid=0, jobs=1): err= 0: pid=116478: Sun Jul 14 20:29:50 2024 00:30:03.439 read: IOPS=1697, BW=6789KiB/s (6952kB/s)(66.5MiB/10037msec) 00:30:03.439 slat (nsec): min=5821, max=42801, avg=7466.36, stdev=3107.04 00:30:03.439 clat (usec): min=347, max=42462, avg=2333.88, stdev=8633.83 00:30:03.439 lat (usec): min=353, max=42471, avg=2341.35, stdev=8633.94 00:30:03.439 clat percentiles (usec): 00:30:03.439 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 375], 00:30:03.439 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 
408], 00:30:03.439 | 70.00th=[ 420], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 553], 00:30:03.439 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:30:03.439 | 99.99th=[42206] 00:30:03.439 bw ( KiB/s): min= 2080, max=10208, per=100.00%, avg=6812.80, stdev=1932.03, samples=20 00:30:03.439 iops : min= 520, max= 2552, avg=1703.20, stdev=483.01, samples=20 00:30:03.439 lat (usec) : 500=93.77%, 750=1.42%, 1000=0.02% 00:30:03.439 lat (msec) : 4=0.02%, 50=4.77% 00:30:03.439 cpu : usr=90.20%, sys=8.86%, ctx=168, majf=0, minf=0 00:30:03.439 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:03.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.439 issued rwts: total=17036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.439 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:03.439 00:30:03.439 Run status group 0 (all jobs): 00:30:03.439 READ: bw=6789KiB/s (6952kB/s), 6789KiB/s-6789KiB/s (6952kB/s-6952kB/s), io=66.5MiB (69.8MB), run=10037-10037msec 00:30:03.439 20:29:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:03.439 20:29:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 ************************************ 00:30:03.440 END TEST fio_dif_1_default 00:30:03.440 ************************************ 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 00:30:03.440 real 0m11.184s 00:30:03.440 user 0m9.787s 00:30:03.440 sys 0m1.210s 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 20:29:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:03.440 20:29:51 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:03.440 20:29:51 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 ************************************ 00:30:03.440 START TEST fio_dif_1_multi_subsystems 00:30:03.440 ************************************ 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 bdev_null0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 [2024-07-14 20:29:51.325118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 bdev_null1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.440 { 00:30:03.440 "params": { 00:30:03.440 "name": "Nvme$subsystem", 00:30:03.440 "trtype": "$TEST_TRANSPORT", 00:30:03.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.440 "adrfam": "ipv4", 00:30:03.440 "trsvcid": "$NVMF_PORT", 00:30:03.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.440 "hdgst": ${hdgst:-false}, 00:30:03.440 "ddgst": ${ddgst:-false} 00:30:03.440 }, 00:30:03.440 "method": "bdev_nvme_attach_controller" 00:30:03.440 } 00:30:03.440 EOF 00:30:03.440 )") 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:03.440 { 00:30:03.440 "params": { 00:30:03.440 "name": "Nvme$subsystem", 00:30:03.440 "trtype": "$TEST_TRANSPORT", 00:30:03.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.440 "adrfam": "ipv4", 00:30:03.440 "trsvcid": "$NVMF_PORT", 00:30:03.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.440 "hdgst": ${hdgst:-false}, 00:30:03.440 "ddgst": ${ddgst:-false} 00:30:03.440 }, 00:30:03.440 "method": "bdev_nvme_attach_controller" 00:30:03.440 } 00:30:03.440 EOF 00:30:03.440 )") 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
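Note: at this point gen_nvmf_target_json has assembled one bdev_nvme_attach_controller entry per subsystem, and fio is about to run with the SPDK bdev plugin, reading the JSON from /dev/fd/62 and the generated job file from /dev/fd/61. A standalone approximation using regular files is sketched below. Only the Nvme0 entry is shown (the multi-subsystems test appends an identical Nvme1/cnode1 entry); the "subsystems"/"bdev"/"config" wrapper and the job-file details (bdev name Nvme0n1, thread=1) are assumptions based on common SPDK conventions, since the log only prints the per-controller parameters.

# Sketch: drive the NVMe-oF target above through fio's SPDK bdev plugin.
cat > /tmp/nvme_tgt.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job parameters mirror the fio preamble in the log (randread, 4k, iodepth=4);
# thread=1 and the Nvme0n1 bdev name are assumed, not taken from the generated file.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=4k
iodepth=4

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_tgt.json /tmp/dif.fio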
00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:03.440 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:03.440 "params": { 00:30:03.440 "name": "Nvme0", 00:30:03.440 "trtype": "tcp", 00:30:03.440 "traddr": "10.0.0.2", 00:30:03.440 "adrfam": "ipv4", 00:30:03.440 "trsvcid": "4420", 00:30:03.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:03.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:03.440 "hdgst": false, 00:30:03.440 "ddgst": false 00:30:03.440 }, 00:30:03.440 "method": "bdev_nvme_attach_controller" 00:30:03.440 },{ 00:30:03.440 "params": { 00:30:03.440 "name": "Nvme1", 00:30:03.440 "trtype": "tcp", 00:30:03.440 "traddr": "10.0.0.2", 00:30:03.440 "adrfam": "ipv4", 00:30:03.440 "trsvcid": "4420", 00:30:03.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.441 "hdgst": false, 00:30:03.441 "ddgst": false 00:30:03.441 }, 00:30:03.441 "method": "bdev_nvme_attach_controller" 00:30:03.441 }' 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:03.441 20:29:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:03.441 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:03.441 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:03.441 fio-3.35 00:30:03.441 Starting 2 threads 00:30:13.408 00:30:13.408 filename0: (groupid=0, jobs=1): err= 0: pid=116632: Sun Jul 14 20:30:02 2024 00:30:13.408 read: IOPS=221, BW=886KiB/s (908kB/s)(8864KiB/10001msec) 00:30:13.408 slat (usec): min=6, max=127, avg= 8.04, stdev= 3.99 00:30:13.408 clat (usec): min=364, max=41559, avg=18027.08, stdev=20066.32 00:30:13.408 lat (usec): min=370, max=41585, avg=18035.12, stdev=20066.41 00:30:13.408 clat percentiles (usec): 00:30:13.408 | 1.00th=[ 375], 5.00th=[ 383], 10.00th=[ 388], 20.00th=[ 400], 00:30:13.408 | 30.00th=[ 408], 40.00th=[ 424], 50.00th=[ 461], 60.00th=[40633], 00:30:13.408 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:13.408 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:13.408 | 99.99th=[41681] 00:30:13.408 bw ( KiB/s): min= 608, max= 1216, per=50.32%, avg=867.26, stdev=198.74, samples=19 00:30:13.408 iops : min= 152, 
max= 304, avg=216.79, stdev=49.70, samples=19 00:30:13.408 lat (usec) : 500=53.70%, 750=2.57%, 1000=0.05% 00:30:13.408 lat (msec) : 2=0.18%, 50=43.50% 00:30:13.408 cpu : usr=95.23%, sys=4.10%, ctx=103, majf=0, minf=9 00:30:13.408 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.408 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.408 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:13.408 filename1: (groupid=0, jobs=1): err= 0: pid=116633: Sun Jul 14 20:30:02 2024 00:30:13.408 read: IOPS=209, BW=839KiB/s (859kB/s)(8416KiB/10029msec) 00:30:13.408 slat (nsec): min=6223, max=48056, avg=7780.97, stdev=3031.44 00:30:13.408 clat (usec): min=370, max=41568, avg=19041.76, stdev=20173.69 00:30:13.408 lat (usec): min=376, max=41578, avg=19049.54, stdev=20173.65 00:30:13.408 clat percentiles (usec): 00:30:13.408 | 1.00th=[ 375], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 400], 00:30:13.408 | 30.00th=[ 412], 40.00th=[ 429], 50.00th=[ 494], 60.00th=[40633], 00:30:13.408 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:13.408 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:13.408 | 99.99th=[41681] 00:30:13.408 bw ( KiB/s): min= 512, max= 1216, per=48.69%, avg=839.90, stdev=184.27, samples=20 00:30:13.408 iops : min= 128, max= 304, avg=209.95, stdev=46.08, samples=20 00:30:13.408 lat (usec) : 500=50.33%, 750=3.47% 00:30:13.408 lat (msec) : 2=0.19%, 50=46.01% 00:30:13.408 cpu : usr=95.69%, sys=3.95%, ctx=11, majf=0, minf=0 00:30:13.408 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.408 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.408 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:13.408 00:30:13.408 Run status group 0 (all jobs): 00:30:13.408 READ: bw=1723KiB/s (1764kB/s), 839KiB/s-886KiB/s (859kB/s-908kB/s), io=16.9MiB (17.7MB), run=10001-10029msec 00:30:13.408 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:13.408 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:13.408 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:13.408 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:13.408 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:13.408 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:13.408 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.409 20:30:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:13.409 ************************************ 00:30:13.409 END TEST fio_dif_1_multi_subsystems 00:30:13.409 ************************************ 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.409 00:30:13.409 real 0m11.176s 00:30:13.409 user 0m19.901s 00:30:13.409 sys 0m1.103s 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:13.409 20:30:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:13.667 20:30:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:13.667 20:30:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:13.667 20:30:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:13.667 20:30:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:13.667 ************************************ 00:30:13.667 START TEST fio_dif_rand_params 00:30:13.667 ************************************ 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:13.667 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.668 bdev_null0 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:13.668 [2024-07-14 20:30:02.567654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:13.668 { 00:30:13.668 "params": { 00:30:13.668 "name": "Nvme$subsystem", 00:30:13.668 "trtype": "$TEST_TRANSPORT", 00:30:13.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.668 "adrfam": "ipv4", 00:30:13.668 "trsvcid": "$NVMF_PORT", 00:30:13.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.668 "hdgst": ${hdgst:-false}, 00:30:13.668 "ddgst": ${ddgst:-false} 00:30:13.668 }, 00:30:13.668 "method": "bdev_nvme_attach_controller" 
00:30:13.668 } 00:30:13.668 EOF 00:30:13.668 )") 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
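Note: the create_subsystems step for fio_dif_rand_params differs from the earlier tests only in the DIF type: a null bdev with 16-byte metadata and protection information type 3 is exported over the TCP transport that was created earlier with --dif-insert-or-strip. The rpc_cmd calls traced above can be reproduced by hand roughly as below; rpc.py and the default /var/tmp/spdk.sock socket are assumed, while the arguments themselves are copied from the trace.

# Approximate hand-run version of the RPC sequence behind create_subsystems (DIF type 3 case).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the options used in this run (created once per target, earlier in the log).
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Export it over NVMe/TCP on the in-namespace target address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420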
00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:13.668 "params": { 00:30:13.668 "name": "Nvme0", 00:30:13.668 "trtype": "tcp", 00:30:13.668 "traddr": "10.0.0.2", 00:30:13.668 "adrfam": "ipv4", 00:30:13.668 "trsvcid": "4420", 00:30:13.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:13.668 "hdgst": false, 00:30:13.668 "ddgst": false 00:30:13.668 }, 00:30:13.668 "method": "bdev_nvme_attach_controller" 00:30:13.668 }' 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:13.668 20:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:13.926 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:13.926 ... 
00:30:13.926 fio-3.35 00:30:13.926 Starting 3 threads 00:30:20.483 00:30:20.483 filename0: (groupid=0, jobs=1): err= 0: pid=116789: Sun Jul 14 20:30:08 2024 00:30:20.483 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(156MiB/5002msec) 00:30:20.483 slat (nsec): min=6278, max=52745, avg=12202.24, stdev=6118.04 00:30:20.483 clat (usec): min=3482, max=53023, avg=12014.14, stdev=10616.21 00:30:20.483 lat (usec): min=3491, max=53035, avg=12026.34, stdev=10616.39 00:30:20.483 clat percentiles (usec): 00:30:20.483 | 1.00th=[ 3916], 5.00th=[ 5342], 10.00th=[ 6325], 20.00th=[ 6980], 00:30:20.483 | 30.00th=[ 7373], 40.00th=[ 8160], 50.00th=[10159], 60.00th=[10945], 00:30:20.483 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12911], 95.00th=[48497], 00:30:20.483 | 99.00th=[52167], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:30:20.483 | 99.99th=[53216] 00:30:20.483 bw ( KiB/s): min=22272, max=39168, per=32.47%, avg=32085.33, stdev=5191.50, samples=9 00:30:20.483 iops : min= 174, max= 306, avg=250.67, stdev=40.56, samples=9 00:30:20.483 lat (msec) : 4=2.81%, 10=46.27%, 20=43.95%, 50=3.85%, 100=3.13% 00:30:20.483 cpu : usr=93.72%, sys=4.76%, ctx=10, majf=0, minf=0 00:30:20.483 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:20.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.483 issued rwts: total=1247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.483 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:20.483 filename0: (groupid=0, jobs=1): err= 0: pid=116790: Sun Jul 14 20:30:08 2024 00:30:20.483 read: IOPS=213, BW=26.6MiB/s (27.9MB/s)(134MiB/5014msec) 00:30:20.483 slat (nsec): min=6392, max=70418, avg=15396.87, stdev=7887.59 00:30:20.483 clat (usec): min=3704, max=52860, avg=14045.54, stdev=13587.52 00:30:20.483 lat (usec): min=3714, max=52866, avg=14060.94, stdev=13587.19 00:30:20.483 clat percentiles (usec): 00:30:20.483 | 1.00th=[ 3949], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7504], 00:30:20.483 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:30:20.483 | 70.00th=[10159], 80.00th=[10683], 90.00th=[48497], 95.00th=[50070], 00:30:20.483 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:30:20.483 | 99.99th=[52691] 00:30:20.483 bw ( KiB/s): min=21248, max=33024, per=27.61%, avg=27289.60, stdev=4419.57, samples=10 00:30:20.483 iops : min= 166, max= 258, avg=213.20, stdev=34.53, samples=10 00:30:20.483 lat (msec) : 4=1.12%, 10=61.93%, 20=24.32%, 50=7.58%, 100=5.05% 00:30:20.483 cpu : usr=94.73%, sys=4.09%, ctx=15, majf=0, minf=0 00:30:20.483 IO depths : 1=4.9%, 2=95.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:20.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.483 issued rwts: total=1069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.483 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:20.483 filename0: (groupid=0, jobs=1): err= 0: pid=116791: Sun Jul 14 20:30:08 2024 00:30:20.483 read: IOPS=310, BW=38.9MiB/s (40.7MB/s)(194MiB/5002msec) 00:30:20.483 slat (nsec): min=6233, max=48430, avg=10810.69, stdev=5663.56 00:30:20.483 clat (usec): min=3366, max=53027, avg=9627.16, stdev=5585.10 00:30:20.483 lat (usec): min=3376, max=53033, avg=9637.97, stdev=5585.72 00:30:20.483 clat percentiles (usec): 00:30:20.483 | 1.00th=[ 3720], 5.00th=[ 3884], 10.00th=[ 
3982], 20.00th=[ 4293], 00:30:20.483 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9765], 00:30:20.483 | 70.00th=[12256], 80.00th=[13173], 90.00th=[13829], 95.00th=[14353], 00:30:20.483 | 99.00th=[45876], 99.50th=[49546], 99.90th=[52691], 99.95th=[53216], 00:30:20.483 | 99.99th=[53216] 00:30:20.483 bw ( KiB/s): min=33792, max=46848, per=40.23%, avg=39756.80, stdev=4712.76, samples=10 00:30:20.483 iops : min= 264, max= 366, avg=310.60, stdev=36.82, samples=10 00:30:20.483 lat (msec) : 4=11.70%, 10=49.71%, 20=37.43%, 50=0.77%, 100=0.39% 00:30:20.483 cpu : usr=92.88%, sys=5.32%, ctx=48, majf=0, minf=0 00:30:20.483 IO depths : 1=21.6%, 2=78.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:20.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:20.483 issued rwts: total=1555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:20.483 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:20.483 00:30:20.483 Run status group 0 (all jobs): 00:30:20.483 READ: bw=96.5MiB/s (101MB/s), 26.6MiB/s-38.9MiB/s (27.9MB/s-40.7MB/s), io=484MiB (507MB), run=5002-5014msec 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.483 bdev_null0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.483 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 [2024-07-14 20:30:08.722727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 bdev_null1 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 20:30:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 bdev_null2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.484 { 00:30:20.484 "params": { 00:30:20.484 "name": "Nvme$subsystem", 00:30:20.484 "trtype": "$TEST_TRANSPORT", 00:30:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:20.484 "adrfam": "ipv4", 00:30:20.484 "trsvcid": "$NVMF_PORT", 00:30:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.484 "hdgst": ${hdgst:-false}, 00:30:20.484 "ddgst": ${ddgst:-false} 00:30:20.484 }, 00:30:20.484 "method": "bdev_nvme_attach_controller" 00:30:20.484 } 00:30:20.484 EOF 00:30:20.484 )") 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.484 { 00:30:20.484 "params": { 00:30:20.484 "name": "Nvme$subsystem", 00:30:20.484 "trtype": "$TEST_TRANSPORT", 00:30:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.484 "adrfam": "ipv4", 00:30:20.484 "trsvcid": "$NVMF_PORT", 00:30:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.484 "hdgst": ${hdgst:-false}, 00:30:20.484 "ddgst": ${ddgst:-false} 00:30:20.484 }, 00:30:20.484 "method": "bdev_nvme_attach_controller" 00:30:20.484 } 00:30:20.484 EOF 00:30:20.484 )") 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.484 { 00:30:20.484 "params": { 00:30:20.484 "name": "Nvme$subsystem", 00:30:20.484 "trtype": "$TEST_TRANSPORT", 00:30:20.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.484 "adrfam": "ipv4", 00:30:20.484 "trsvcid": "$NVMF_PORT", 00:30:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.484 "hdgst": ${hdgst:-false}, 00:30:20.484 "ddgst": ${ddgst:-false} 00:30:20.484 }, 00:30:20.484 "method": "bdev_nvme_attach_controller" 00:30:20.484 } 00:30:20.484 EOF 00:30:20.484 )") 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:20.484 "params": { 00:30:20.484 "name": "Nvme0", 00:30:20.484 "trtype": "tcp", 00:30:20.484 "traddr": "10.0.0.2", 00:30:20.484 "adrfam": "ipv4", 00:30:20.484 "trsvcid": "4420", 00:30:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:20.484 "hdgst": false, 00:30:20.484 "ddgst": false 00:30:20.484 }, 00:30:20.484 "method": "bdev_nvme_attach_controller" 00:30:20.484 },{ 00:30:20.484 "params": { 00:30:20.484 "name": "Nvme1", 00:30:20.484 "trtype": "tcp", 00:30:20.484 "traddr": "10.0.0.2", 00:30:20.484 "adrfam": "ipv4", 00:30:20.484 "trsvcid": "4420", 00:30:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:20.484 "hdgst": false, 00:30:20.484 "ddgst": false 00:30:20.484 }, 00:30:20.484 "method": "bdev_nvme_attach_controller" 00:30:20.484 },{ 00:30:20.484 "params": { 00:30:20.484 "name": "Nvme2", 00:30:20.484 "trtype": "tcp", 00:30:20.484 "traddr": "10.0.0.2", 00:30:20.484 "adrfam": "ipv4", 00:30:20.484 "trsvcid": "4420", 00:30:20.484 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:20.484 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:20.484 "hdgst": false, 00:30:20.484 "ddgst": false 00:30:20.484 }, 00:30:20.484 "method": "bdev_nvme_attach_controller" 00:30:20.484 }' 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:20.484 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:20.485 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:20.485 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:20.485 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 
-- # [[ -n '' ]] 00:30:20.485 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:20.485 20:30:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.485 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:20.485 ... 00:30:20.485 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:20.485 ... 00:30:20.485 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:20.485 ... 00:30:20.485 fio-3.35 00:30:20.485 Starting 24 threads 00:30:32.701 00:30:32.701 filename0: (groupid=0, jobs=1): err= 0: pid=116885: Sun Jul 14 20:30:19 2024 00:30:32.701 read: IOPS=192, BW=772KiB/s (790kB/s)(7724KiB/10010msec) 00:30:32.701 slat (usec): min=4, max=4018, avg=18.47, stdev=157.25 00:30:32.701 clat (msec): min=10, max=171, avg=82.76, stdev=23.29 00:30:32.701 lat (msec): min=10, max=171, avg=82.78, stdev=23.30 00:30:32.701 clat percentiles (msec): 00:30:32.701 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 63], 00:30:32.701 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 89], 00:30:32.701 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 115], 95.00th=[ 124], 00:30:32.701 | 99.00th=[ 142], 99.50th=[ 165], 99.90th=[ 171], 99.95th=[ 171], 00:30:32.701 | 99.99th=[ 171] 00:30:32.701 bw ( KiB/s): min= 512, max= 1024, per=3.65%, avg=754.53, stdev=126.63, samples=19 00:30:32.701 iops : min= 128, max= 256, avg=188.63, stdev=31.66, samples=19 00:30:32.701 lat (msec) : 20=0.83%, 50=3.21%, 100=76.18%, 250=19.78% 00:30:32.701 cpu : usr=43.95%, sys=0.75%, ctx=1520, majf=0, minf=9 00:30:32.701 IO depths : 1=2.9%, 2=6.4%, 4=17.9%, 8=62.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:32.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.701 complete : 0=0.0%, 4=91.5%, 8=3.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.701 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.701 filename0: (groupid=0, jobs=1): err= 0: pid=116886: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=233, BW=935KiB/s (958kB/s)(9404KiB/10054msec) 00:30:32.702 slat (usec): min=4, max=8029, avg=21.96, stdev=252.11 00:30:32.702 clat (msec): min=5, max=144, avg=68.25, stdev=19.14 00:30:32.702 lat (msec): min=5, max=144, avg=68.27, stdev=19.14 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 10], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 56], 00:30:32.702 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 71], 00:30:32.702 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 104], 00:30:32.702 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 146], 99.95th=[ 146], 00:30:32.702 | 99.99th=[ 146] 00:30:32.702 bw ( KiB/s): min= 768, max= 1349, per=4.52%, avg=934.00, stdev=133.31, samples=20 00:30:32.702 iops : min= 192, max= 337, avg=233.35, stdev=33.25, samples=20 00:30:32.702 lat (msec) : 10=1.23%, 20=0.81%, 50=13.65%, 100=78.31%, 250=6.00% 00:30:32.702 cpu : usr=37.91%, sys=0.85%, ctx=1110, majf=0, minf=9 00:30:32.702 IO depths : 1=0.8%, 2=2.0%, 4=8.9%, 8=75.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=89.7%, 8=5.6%, 
16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename0: (groupid=0, jobs=1): err= 0: pid=116887: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=217, BW=871KiB/s (892kB/s)(8736KiB/10029msec) 00:30:32.702 slat (usec): min=4, max=8019, avg=23.42, stdev=296.65 00:30:32.702 clat (msec): min=31, max=155, avg=73.31, stdev=21.89 00:30:32.702 lat (msec): min=31, max=155, avg=73.33, stdev=21.89 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 54], 00:30:32.702 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 75], 00:30:32.702 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 109], 00:30:32.702 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:30:32.702 | 99.99th=[ 157] 00:30:32.702 bw ( KiB/s): min= 640, max= 1200, per=4.20%, avg=867.20, stdev=144.52, samples=20 00:30:32.702 iops : min= 160, max= 300, avg=216.80, stdev=36.13, samples=20 00:30:32.702 lat (msec) : 50=16.39%, 100=71.52%, 250=12.09% 00:30:32.702 cpu : usr=34.71%, sys=0.62%, ctx=985, majf=0, minf=9 00:30:32.702 IO depths : 1=1.1%, 2=2.2%, 4=9.9%, 8=74.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename0: (groupid=0, jobs=1): err= 0: pid=116888: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=259, BW=1040KiB/s (1065kB/s)(10.2MiB/10062msec) 00:30:32.702 slat (usec): min=4, max=8022, avg=14.43, stdev=156.81 00:30:32.702 clat (msec): min=9, max=137, avg=61.39, stdev=21.23 00:30:32.702 lat (msec): min=9, max=137, avg=61.40, stdev=21.23 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 45], 00:30:32.702 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 64], 00:30:32.702 | 70.00th=[ 70], 80.00th=[ 78], 90.00th=[ 91], 95.00th=[ 101], 00:30:32.702 | 99.00th=[ 126], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:30:32.702 | 99.99th=[ 138] 00:30:32.702 bw ( KiB/s): min= 640, max= 1584, per=5.03%, avg=1039.60, stdev=243.38, samples=20 00:30:32.702 iops : min= 160, max= 396, avg=259.90, stdev=60.84, samples=20 00:30:32.702 lat (msec) : 10=0.23%, 20=1.11%, 50=32.05%, 100=61.68%, 250=4.93% 00:30:32.702 cpu : usr=44.57%, sys=0.74%, ctx=1313, majf=0, minf=9 00:30:32.702 IO depths : 1=0.9%, 2=2.1%, 4=10.0%, 8=74.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=89.9%, 8=5.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename0: (groupid=0, jobs=1): err= 0: pid=116889: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=211, BW=846KiB/s (866kB/s)(8484KiB/10034msec) 00:30:32.702 slat (usec): min=6, max=8043, avg=16.15, stdev=174.52 00:30:32.702 clat (msec): min=27, max=167, avg=75.55, stdev=23.92 00:30:32.702 lat (msec): min=27, max=167, avg=75.56, stdev=23.91 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:30:32.702 | 
30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:30:32.702 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:30:32.702 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:30:32.702 | 99.99th=[ 167] 00:30:32.702 bw ( KiB/s): min= 560, max= 1104, per=4.08%, avg=842.00, stdev=175.96, samples=20 00:30:32.702 iops : min= 140, max= 276, avg=210.50, stdev=43.99, samples=20 00:30:32.702 lat (msec) : 50=13.34%, 100=72.56%, 250=14.10% 00:30:32.702 cpu : usr=33.99%, sys=0.57%, ctx=909, majf=0, minf=9 00:30:32.702 IO depths : 1=0.7%, 2=1.6%, 4=7.5%, 8=77.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename0: (groupid=0, jobs=1): err= 0: pid=116890: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=187, BW=749KiB/s (767kB/s)(7492KiB/10005msec) 00:30:32.702 slat (nsec): min=4757, max=76638, avg=12749.50, stdev=7657.11 00:30:32.702 clat (msec): min=12, max=155, avg=85.39, stdev=25.26 00:30:32.702 lat (msec): min=12, max=155, avg=85.40, stdev=25.26 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 28], 5.00th=[ 51], 10.00th=[ 59], 20.00th=[ 64], 00:30:32.702 | 30.00th=[ 70], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 88], 00:30:32.702 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:30:32.702 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:30:32.702 | 99.99th=[ 157] 00:30:32.702 bw ( KiB/s): min= 560, max= 992, per=3.55%, avg=734.79, stdev=132.85, samples=19 00:30:32.702 iops : min= 140, max= 248, avg=183.68, stdev=33.21, samples=19 00:30:32.702 lat (msec) : 20=0.85%, 50=3.68%, 100=69.25%, 250=26.21% 00:30:32.702 cpu : usr=35.86%, sys=0.71%, ctx=990, majf=0, minf=9 00:30:32.702 IO depths : 1=1.5%, 2=3.3%, 4=9.9%, 8=71.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=90.5%, 8=6.1%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename0: (groupid=0, jobs=1): err= 0: pid=116891: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=193, BW=773KiB/s (792kB/s)(7752KiB/10022msec) 00:30:32.702 slat (usec): min=4, max=12021, avg=31.99, stdev=416.81 00:30:32.702 clat (msec): min=24, max=180, avg=82.52, stdev=24.62 00:30:32.702 lat (msec): min=24, max=180, avg=82.55, stdev=24.62 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:30:32.702 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 87], 00:30:32.702 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 118], 95.00th=[ 130], 00:30:32.702 | 99.00th=[ 146], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 182], 00:30:32.702 | 99.99th=[ 182] 00:30:32.702 bw ( KiB/s): min= 552, max= 1024, per=3.72%, avg=768.80, stdev=134.16, samples=20 00:30:32.702 iops : min= 138, max= 256, avg=192.20, stdev=33.54, samples=20 00:30:32.702 lat (msec) : 50=7.59%, 100=73.12%, 250=19.30% 00:30:32.702 cpu : usr=36.02%, sys=0.73%, ctx=961, majf=0, minf=9 00:30:32.702 IO depths : 1=2.7%, 2=5.6%, 4=15.5%, 8=65.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename0: (groupid=0, jobs=1): err= 0: pid=116892: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=232, BW=929KiB/s (951kB/s)(9316KiB/10031msec) 00:30:32.702 slat (usec): min=5, max=8025, avg=18.15, stdev=221.37 00:30:32.702 clat (msec): min=23, max=161, avg=68.78, stdev=23.85 00:30:32.702 lat (msec): min=23, max=161, avg=68.80, stdev=23.86 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:30:32.702 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 70], 00:30:32.702 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 114], 00:30:32.702 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:30:32.702 | 99.99th=[ 163] 00:30:32.702 bw ( KiB/s): min= 600, max= 1152, per=4.48%, avg=925.05, stdev=185.95, samples=20 00:30:32.702 iops : min= 150, max= 288, avg=231.25, stdev=46.47, samples=20 00:30:32.702 lat (msec) : 50=23.79%, 100=67.02%, 250=9.19% 00:30:32.702 cpu : usr=32.66%, sys=0.56%, ctx=868, majf=0, minf=9 00:30:32.702 IO depths : 1=0.9%, 2=2.0%, 4=8.2%, 8=76.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename1: (groupid=0, jobs=1): err= 0: pid=116893: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=203, BW=813KiB/s (833kB/s)(8136KiB/10006msec) 00:30:32.702 slat (usec): min=4, max=8036, avg=28.48, stdev=355.20 00:30:32.702 clat (msec): min=34, max=175, avg=78.49, stdev=26.18 00:30:32.702 lat (msec): min=34, max=175, avg=78.52, stdev=26.19 00:30:32.702 clat percentiles (msec): 00:30:32.702 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 59], 00:30:32.702 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 83], 00:30:32.702 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 117], 95.00th=[ 132], 00:30:32.702 | 99.00th=[ 148], 99.50th=[ 169], 99.90th=[ 176], 99.95th=[ 176], 00:30:32.702 | 99.99th=[ 176] 00:30:32.702 bw ( KiB/s): min= 512, max= 1120, per=3.83%, avg=790.32, stdev=168.88, samples=19 00:30:32.702 iops : min= 128, max= 280, avg=197.58, stdev=42.22, samples=19 00:30:32.702 lat (msec) : 50=11.60%, 100=70.60%, 250=17.80% 00:30:32.702 cpu : usr=33.05%, sys=0.65%, ctx=876, majf=0, minf=9 00:30:32.702 IO depths : 1=1.7%, 2=3.9%, 4=12.7%, 8=70.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:32.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.702 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.702 filename1: (groupid=0, jobs=1): err= 0: pid=116894: Sun Jul 14 20:30:19 2024 00:30:32.702 read: IOPS=241, BW=964KiB/s (987kB/s)(9712KiB/10071msec) 00:30:32.703 slat (usec): min=3, max=8026, avg=25.11, stdev=276.45 00:30:32.703 clat (msec): min=2, max=147, avg=66.03, stdev=26.39 00:30:32.703 lat (msec): min=2, max=147, avg=66.06, stdev=26.39 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 
9], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 45], 00:30:32.703 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 69], 00:30:32.703 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 116], 00:30:32.703 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 148], 00:30:32.703 | 99.99th=[ 148] 00:30:32.703 bw ( KiB/s): min= 512, max= 1496, per=4.67%, avg=964.45, stdev=268.69, samples=20 00:30:32.703 iops : min= 128, max= 374, avg=241.05, stdev=67.24, samples=20 00:30:32.703 lat (msec) : 4=0.66%, 10=0.66%, 20=1.32%, 50=29.90%, 100=55.72% 00:30:32.703 lat (msec) : 250=11.74% 00:30:32.703 cpu : usr=39.84%, sys=0.66%, ctx=1170, majf=0, minf=9 00:30:32.703 IO depths : 1=0.4%, 2=0.8%, 4=6.8%, 8=78.1%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=89.5%, 8=6.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename1: (groupid=0, jobs=1): err= 0: pid=116895: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=236, BW=947KiB/s (970kB/s)(9492KiB/10023msec) 00:30:32.703 slat (usec): min=4, max=12034, avg=20.22, stdev=267.83 00:30:32.703 clat (msec): min=24, max=151, avg=67.41, stdev=21.72 00:30:32.703 lat (msec): min=24, max=151, avg=67.43, stdev=21.72 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 47], 00:30:32.703 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 69], 00:30:32.703 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 111], 00:30:32.703 | 99.00th=[ 131], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 153], 00:30:32.703 | 99.99th=[ 153] 00:30:32.703 bw ( KiB/s): min= 768, max= 1384, per=4.56%, avg=942.55, stdev=177.42, samples=20 00:30:32.703 iops : min= 192, max= 346, avg=235.60, stdev=44.33, samples=20 00:30:32.703 lat (msec) : 50=23.94%, 100=67.00%, 250=9.06% 00:30:32.703 cpu : usr=44.50%, sys=0.97%, ctx=1616, majf=0, minf=9 00:30:32.703 IO depths : 1=1.3%, 2=2.7%, 4=9.4%, 8=74.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=2373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename1: (groupid=0, jobs=1): err= 0: pid=116896: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=215, BW=861KiB/s (882kB/s)(8640KiB/10030msec) 00:30:32.703 slat (usec): min=3, max=8029, avg=25.52, stdev=302.42 00:30:32.703 clat (msec): min=26, max=166, avg=74.13, stdev=21.81 00:30:32.703 lat (msec): min=26, max=166, avg=74.15, stdev=21.81 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 58], 00:30:32.703 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 81], 00:30:32.703 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 109], 00:30:32.703 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 167], 99.95th=[ 167], 00:30:32.703 | 99.99th=[ 167] 00:30:32.703 bw ( KiB/s): min= 640, max= 1200, per=4.15%, avg=857.60, stdev=150.13, samples=20 00:30:32.703 iops : min= 160, max= 300, avg=214.40, stdev=37.53, samples=20 00:30:32.703 lat (msec) : 50=14.81%, 100=74.26%, 250=10.93% 00:30:32.703 cpu : usr=33.87%, sys=0.62%, ctx=925, majf=0, minf=9 00:30:32.703 IO depths : 1=1.2%, 
2=2.9%, 4=11.4%, 8=72.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename1: (groupid=0, jobs=1): err= 0: pid=116897: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=192, BW=768KiB/s (786kB/s)(7684KiB/10005msec) 00:30:32.703 slat (usec): min=3, max=4128, avg=19.43, stdev=149.19 00:30:32.703 clat (msec): min=6, max=159, avg=83.18, stdev=24.15 00:30:32.703 lat (msec): min=6, max=159, avg=83.20, stdev=24.15 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 63], 00:30:32.703 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 90], 00:30:32.703 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 123], 00:30:32.703 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 161], 00:30:32.703 | 99.99th=[ 161] 00:30:32.703 bw ( KiB/s): min= 560, max= 936, per=3.62%, avg=747.84, stdev=111.53, samples=19 00:30:32.703 iops : min= 140, max= 234, avg=186.95, stdev=27.87, samples=19 00:30:32.703 lat (msec) : 10=1.35%, 20=0.31%, 50=4.42%, 100=72.62%, 250=21.29% 00:30:32.703 cpu : usr=44.66%, sys=0.74%, ctx=1695, majf=0, minf=9 00:30:32.703 IO depths : 1=2.9%, 2=6.3%, 4=17.5%, 8=63.4%, 16=9.9%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=1921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename1: (groupid=0, jobs=1): err= 0: pid=116898: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=192, BW=771KiB/s (790kB/s)(7720KiB/10013msec) 00:30:32.703 slat (usec): min=3, max=11019, avg=21.94, stdev=310.46 00:30:32.703 clat (msec): min=13, max=144, avg=82.81, stdev=21.91 00:30:32.703 lat (msec): min=13, max=144, avg=82.83, stdev=21.92 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 63], 00:30:32.703 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 88], 00:30:32.703 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 120], 00:30:32.703 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:30:32.703 | 99.99th=[ 144] 00:30:32.703 bw ( KiB/s): min= 512, max= 976, per=3.69%, avg=762.21, stdev=106.64, samples=19 00:30:32.703 iops : min= 128, max= 244, avg=190.53, stdev=26.67, samples=19 00:30:32.703 lat (msec) : 20=0.31%, 50=5.85%, 100=72.23%, 250=21.61% 00:30:32.703 cpu : usr=34.25%, sys=0.63%, ctx=932, majf=0, minf=9 00:30:32.703 IO depths : 1=1.8%, 2=4.2%, 4=14.3%, 8=68.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=90.8%, 8=3.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename1: (groupid=0, jobs=1): err= 0: pid=116899: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=199, BW=798KiB/s (817kB/s)(7984KiB/10006msec) 00:30:32.703 slat (usec): min=4, max=11025, avg=22.58, stdev=304.83 00:30:32.703 clat (msec): min=10, max=162, avg=80.02, stdev=25.04 
00:30:32.703 lat (msec): min=10, max=162, avg=80.05, stdev=25.04 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 62], 00:30:32.703 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 85], 00:30:32.703 | 70.00th=[ 92], 80.00th=[ 104], 90.00th=[ 116], 95.00th=[ 124], 00:30:32.703 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 163], 99.95th=[ 163], 00:30:32.703 | 99.99th=[ 163] 00:30:32.703 bw ( KiB/s): min= 600, max= 1072, per=3.81%, avg=786.53, stdev=146.11, samples=19 00:30:32.703 iops : min= 150, max= 268, avg=196.63, stdev=36.53, samples=19 00:30:32.703 lat (msec) : 20=0.80%, 50=7.97%, 100=69.89%, 250=21.34% 00:30:32.703 cpu : usr=32.56%, sys=0.68%, ctx=869, majf=0, minf=9 00:30:32.703 IO depths : 1=2.0%, 2=4.4%, 4=13.0%, 8=69.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename1: (groupid=0, jobs=1): err= 0: pid=116900: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=247, BW=991KiB/s (1015kB/s)(9956KiB/10043msec) 00:30:32.703 slat (usec): min=5, max=3994, avg=14.44, stdev=107.84 00:30:32.703 clat (msec): min=11, max=158, avg=64.32, stdev=20.80 00:30:32.703 lat (msec): min=11, max=158, avg=64.33, stdev=20.81 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 15], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 47], 00:30:32.703 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 67], 00:30:32.703 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 101], 00:30:32.703 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:30:32.703 | 99.99th=[ 159] 00:30:32.703 bw ( KiB/s): min= 720, max= 1456, per=4.80%, avg=992.85, stdev=191.97, samples=20 00:30:32.703 iops : min= 180, max= 364, avg=248.20, stdev=47.99, samples=20 00:30:32.703 lat (msec) : 20=1.29%, 50=25.75%, 100=67.86%, 250=5.10% 00:30:32.703 cpu : usr=44.87%, sys=0.82%, ctx=1273, majf=0, minf=9 00:30:32.703 IO depths : 1=1.0%, 2=2.2%, 4=8.9%, 8=75.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=89.7%, 8=5.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename2: (groupid=0, jobs=1): err= 0: pid=116901: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=209, BW=837KiB/s (857kB/s)(8400KiB/10033msec) 00:30:32.703 slat (usec): min=4, max=7023, avg=22.70, stdev=216.16 00:30:32.703 clat (msec): min=31, max=139, avg=76.13, stdev=23.04 00:30:32.703 lat (msec): min=31, max=140, avg=76.15, stdev=23.04 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 57], 00:30:32.703 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 82], 00:30:32.703 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 118], 00:30:32.703 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:30:32.703 | 99.99th=[ 140] 00:30:32.703 bw ( KiB/s): min= 640, max= 1128, per=4.03%, avg=833.65, stdev=162.08, samples=20 00:30:32.703 iops : min= 160, max= 282, avg=208.40, stdev=40.51, samples=20 00:30:32.703 lat (msec) : 50=14.14%, 100=71.48%, 250=14.38% 
00:30:32.703 cpu : usr=42.30%, sys=0.62%, ctx=1505, majf=0, minf=9 00:30:32.703 IO depths : 1=2.9%, 2=6.3%, 4=15.4%, 8=65.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:32.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.703 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.703 filename2: (groupid=0, jobs=1): err= 0: pid=116902: Sun Jul 14 20:30:19 2024 00:30:32.703 read: IOPS=240, BW=963KiB/s (986kB/s)(9664KiB/10038msec) 00:30:32.703 slat (usec): min=5, max=5017, avg=19.18, stdev=168.12 00:30:32.703 clat (msec): min=25, max=159, avg=66.28, stdev=24.49 00:30:32.703 lat (msec): min=25, max=159, avg=66.30, stdev=24.49 00:30:32.703 clat percentiles (msec): 00:30:32.703 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 00:30:32.704 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 68], 00:30:32.704 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 114], 00:30:32.704 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:30:32.704 | 99.99th=[ 161] 00:30:32.704 bw ( KiB/s): min= 560, max= 1328, per=4.65%, avg=960.00, stdev=240.42, samples=20 00:30:32.704 iops : min= 140, max= 332, avg=240.00, stdev=60.11, samples=20 00:30:32.704 lat (msec) : 50=29.76%, 100=60.02%, 250=10.22% 00:30:32.704 cpu : usr=42.16%, sys=0.81%, ctx=1615, majf=0, minf=9 00:30:32.704 IO depths : 1=0.7%, 2=1.6%, 4=7.5%, 8=77.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:32.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 complete : 0=0.0%, 4=89.4%, 8=6.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 issued rwts: total=2416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.704 filename2: (groupid=0, jobs=1): err= 0: pid=116903: Sun Jul 14 20:30:19 2024 00:30:32.704 read: IOPS=214, BW=858KiB/s (879kB/s)(8628KiB/10052msec) 00:30:32.704 slat (nsec): min=6561, max=52505, avg=12235.06, stdev=7898.27 00:30:32.704 clat (msec): min=10, max=161, avg=74.26, stdev=24.41 00:30:32.704 lat (msec): min=10, max=161, avg=74.27, stdev=24.41 00:30:32.704 clat percentiles (msec): 00:30:32.704 | 1.00th=[ 18], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 55], 00:30:32.704 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 79], 00:30:32.704 | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 121], 00:30:32.704 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:30:32.704 | 99.99th=[ 161] 00:30:32.704 bw ( KiB/s): min= 560, max= 1208, per=4.16%, avg=859.70, stdev=180.33, samples=20 00:30:32.704 iops : min= 140, max= 302, avg=214.90, stdev=45.08, samples=20 00:30:32.704 lat (msec) : 20=1.48%, 50=15.16%, 100=68.71%, 250=14.65% 00:30:32.704 cpu : usr=32.74%, sys=0.50%, ctx=868, majf=0, minf=9 00:30:32.704 IO depths : 1=1.3%, 2=2.6%, 4=10.2%, 8=73.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:32.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.704 filename2: (groupid=0, jobs=1): err= 0: pid=116904: Sun Jul 14 20:30:19 2024 00:30:32.704 read: IOPS=203, BW=814KiB/s (834kB/s)(8156KiB/10014msec) 00:30:32.704 slat (usec): min=5, max=8065, 
avg=22.56, stdev=266.70 00:30:32.704 clat (msec): min=15, max=173, avg=78.40, stdev=23.49 00:30:32.704 lat (msec): min=15, max=173, avg=78.43, stdev=23.49 00:30:32.704 clat percentiles (msec): 00:30:32.704 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:30:32.704 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:30:32.704 | 70.00th=[ 90], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 120], 00:30:32.704 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 174], 00:30:32.704 | 99.99th=[ 174] 00:30:32.704 bw ( KiB/s): min= 640, max= 944, per=3.84%, avg=794.37, stdev=116.34, samples=19 00:30:32.704 iops : min= 160, max= 236, avg=198.58, stdev=29.10, samples=19 00:30:32.704 lat (msec) : 20=0.29%, 50=11.33%, 100=74.35%, 250=14.03% 00:30:32.704 cpu : usr=33.11%, sys=0.59%, ctx=875, majf=0, minf=9 00:30:32.704 IO depths : 1=1.2%, 2=2.6%, 4=10.1%, 8=73.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:32.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.704 filename2: (groupid=0, jobs=1): err= 0: pid=116905: Sun Jul 14 20:30:19 2024 00:30:32.704 read: IOPS=248, BW=995KiB/s (1019kB/s)(9992KiB/10040msec) 00:30:32.704 slat (usec): min=4, max=4016, avg=15.90, stdev=131.18 00:30:32.704 clat (msec): min=6, max=136, avg=64.08, stdev=22.11 00:30:32.704 lat (msec): min=6, max=136, avg=64.09, stdev=22.11 00:30:32.704 clat percentiles (msec): 00:30:32.704 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 45], 00:30:32.704 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:30:32.704 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 104], 00:30:32.704 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 138], 99.95th=[ 138], 00:30:32.704 | 99.99th=[ 138] 00:30:32.704 bw ( KiB/s): min= 704, max= 1408, per=4.83%, avg=997.20, stdev=200.74, samples=20 00:30:32.704 iops : min= 176, max= 352, avg=249.30, stdev=50.18, samples=20 00:30:32.704 lat (msec) : 10=0.64%, 20=0.64%, 50=29.38%, 100=62.09%, 250=7.25% 00:30:32.704 cpu : usr=40.41%, sys=0.64%, ctx=1385, majf=0, minf=9 00:30:32.704 IO depths : 1=0.4%, 2=1.0%, 4=5.9%, 8=79.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:32.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 complete : 0=0.0%, 4=89.1%, 8=6.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 issued rwts: total=2498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.704 filename2: (groupid=0, jobs=1): err= 0: pid=116906: Sun Jul 14 20:30:19 2024 00:30:32.704 read: IOPS=219, BW=879KiB/s (900kB/s)(8820KiB/10033msec) 00:30:32.704 slat (usec): min=6, max=4030, avg=19.34, stdev=151.34 00:30:32.704 clat (msec): min=29, max=143, avg=72.66, stdev=22.34 00:30:32.704 lat (msec): min=29, max=143, avg=72.68, stdev=22.34 00:30:32.704 clat percentiles (msec): 00:30:32.704 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 55], 00:30:32.704 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 77], 00:30:32.704 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 116], 00:30:32.704 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:30:32.704 | 99.99th=[ 144] 00:30:32.704 bw ( KiB/s): min= 512, max= 1200, per=4.24%, avg=875.25, stdev=170.29, samples=20 00:30:32.704 iops : min= 128, max= 300, 
avg=218.80, stdev=42.58, samples=20 00:30:32.704 lat (msec) : 50=15.06%, 100=74.97%, 250=9.98% 00:30:32.704 cpu : usr=43.46%, sys=0.86%, ctx=1375, majf=0, minf=9 00:30:32.704 IO depths : 1=2.4%, 2=5.1%, 4=13.3%, 8=68.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:30:32.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.704 filename2: (groupid=0, jobs=1): err= 0: pid=116907: Sun Jul 14 20:30:19 2024 00:30:32.704 read: IOPS=197, BW=789KiB/s (808kB/s)(7896KiB/10005msec) 00:30:32.704 slat (usec): min=5, max=8048, avg=24.33, stdev=270.94 00:30:32.704 clat (msec): min=8, max=158, avg=80.89, stdev=22.75 00:30:32.704 lat (msec): min=8, max=158, avg=80.91, stdev=22.76 00:30:32.704 clat percentiles (msec): 00:30:32.704 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 61], 00:30:32.704 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 86], 00:30:32.704 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 123], 00:30:32.704 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:30:32.704 | 99.99th=[ 159] 00:30:32.704 bw ( KiB/s): min= 560, max= 1024, per=3.75%, avg=775.74, stdev=130.93, samples=19 00:30:32.704 iops : min= 140, max= 256, avg=193.89, stdev=32.74, samples=19 00:30:32.704 lat (msec) : 10=0.30%, 20=0.30%, 50=5.57%, 100=75.73%, 250=18.09% 00:30:32.704 cpu : usr=35.86%, sys=0.67%, ctx=1046, majf=0, minf=9 00:30:32.704 IO depths : 1=2.6%, 2=5.9%, 4=16.4%, 8=64.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:30:32.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.704 filename2: (groupid=0, jobs=1): err= 0: pid=116908: Sun Jul 14 20:30:19 2024 00:30:32.704 read: IOPS=193, BW=772KiB/s (791kB/s)(7724KiB/10002msec) 00:30:32.704 slat (usec): min=3, max=6725, avg=18.16, stdev=167.88 00:30:32.704 clat (msec): min=5, max=173, avg=82.72, stdev=25.24 00:30:32.704 lat (msec): min=5, max=173, avg=82.74, stdev=25.25 00:30:32.704 clat percentiles (msec): 00:30:32.704 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 55], 20.00th=[ 63], 00:30:32.704 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 88], 00:30:32.704 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 130], 00:30:32.704 | 99.00th=[ 146], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 174], 00:30:32.704 | 99.99th=[ 174] 00:30:32.704 bw ( KiB/s): min= 512, max= 1048, per=3.65%, avg=754.11, stdev=140.90, samples=19 00:30:32.704 iops : min= 128, max= 262, avg=188.53, stdev=35.23, samples=19 00:30:32.704 lat (msec) : 10=0.31%, 20=0.83%, 50=6.27%, 100=71.36%, 250=21.23% 00:30:32.704 cpu : usr=41.32%, sys=0.56%, ctx=1173, majf=0, minf=9 00:30:32.704 IO depths : 1=1.9%, 2=4.5%, 4=13.5%, 8=68.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:32.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 complete : 0=0.0%, 4=91.3%, 8=4.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.704 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:32.704 00:30:32.704 Run status group 0 (all jobs): 00:30:32.704 READ: bw=20.2MiB/s 
(21.1MB/s), 749KiB/s-1040KiB/s (767kB/s-1065kB/s), io=203MiB (213MB), run=10002-10071msec 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.704 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
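The aggregate READ line sums the 24 jobs to about 20 MiB/s over the roughly 10-second window, with per-job bandwidth between 749 KiB/s and 1040 KiB/s. The destroy_subsystems trace that follows unwinds the setup in the opposite order: each NVMe-oF subsystem is deleted first (taking its namespace and listener with it) and its backing null bdev is removed afterwards. A standalone sketch of that teardown, again assuming scripts/rpc.py in place of the suite's rpc_cmd wrapper:

#!/usr/bin/env bash
# Sketch of the destroy_subsystems teardown traced here: drop the subsystem
# first, then delete the null bdev that backed its namespace.
rpc=./scripts/rpc.py

for i in 0 1 2; do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    "$rpc" bdev_null_delete "bdev_null$i"
done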
00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 bdev_null0 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 [2024-07-14 20:30:20.136739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:32.705 20:30:20 
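This second pass recreates the null bdevs with DIF type 1 and drives them with mixed block sizes (8k reads, 16k writes, 128k trims), queue depth 8, two jobs per file and a 5-second time-based run against two subsystems. The real job file comes out of gen_fio_conf on a file descriptor; a plausible hand-written equivalent for these parameters (section names and bdev names are assumptions, not the generator's literal output) could be produced like this:

#!/usr/bin/env bash
# Illustrative stand-in for the job file of this second run: randread with
# 8k/16k/128k block sizes, iodepth 8, numjobs 2, 5-second time-based run,
# one job section per bdev exposed by the attached NVMe-oF controllers.
cat > fio.job <<EOF
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

With numjobs=2 over the two job sections, this accounts for the four threads fio reports starting further down.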
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 bdev_null1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.705 { 00:30:32.705 "params": { 00:30:32.705 "name": "Nvme$subsystem", 00:30:32.705 "trtype": "$TEST_TRANSPORT", 00:30:32.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.705 "adrfam": "ipv4", 00:30:32.705 "trsvcid": "$NVMF_PORT", 00:30:32.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.705 "hdgst": ${hdgst:-false}, 00:30:32.705 "ddgst": ${ddgst:-false} 00:30:32.705 }, 00:30:32.705 "method": "bdev_nvme_attach_controller" 00:30:32.705 } 00:30:32.705 EOF 00:30:32.705 )") 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.705 { 00:30:32.705 "params": { 00:30:32.705 "name": "Nvme$subsystem", 00:30:32.705 "trtype": "$TEST_TRANSPORT", 00:30:32.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.705 "adrfam": "ipv4", 00:30:32.705 "trsvcid": "$NVMF_PORT", 00:30:32.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.705 "hdgst": ${hdgst:-false}, 00:30:32.705 "ddgst": ${ddgst:-false} 00:30:32.705 }, 00:30:32.705 "method": "bdev_nvme_attach_controller" 00:30:32.705 } 00:30:32.705 EOF 00:30:32.705 )") 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
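Around the fio plumbing, gen_nvmf_target_json collects one bdev_nvme_attach_controller fragment per subsystem through the heredocs traced above; the fragments are then comma-joined (the IFS=, and printf just below) and pretty-printed through jq into the config that fio receives on /dev/fd/62. A simplified sketch of that assembly pattern, assuming a pared-down wrapper document rather than the function's literal output:

#!/usr/bin/env bash
# Sketch of the config-assembly pattern: one attach-controller fragment per
# subsystem, comma-joined and spliced into a bdev-subsystem document that jq
# validates and pretty-prints.
config=()
for sub in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the fragments with commas and let jq validate and pretty-print the result.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s' "${config[*]}")
      ]
    }
  ]
}
JSON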
00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:32.705 "params": { 00:30:32.705 "name": "Nvme0", 00:30:32.705 "trtype": "tcp", 00:30:32.705 "traddr": "10.0.0.2", 00:30:32.705 "adrfam": "ipv4", 00:30:32.705 "trsvcid": "4420", 00:30:32.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:32.705 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:32.705 "hdgst": false, 00:30:32.705 "ddgst": false 00:30:32.705 }, 00:30:32.705 "method": "bdev_nvme_attach_controller" 00:30:32.705 },{ 00:30:32.705 "params": { 00:30:32.705 "name": "Nvme1", 00:30:32.705 "trtype": "tcp", 00:30:32.705 "traddr": "10.0.0.2", 00:30:32.705 "adrfam": "ipv4", 00:30:32.705 "trsvcid": "4420", 00:30:32.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:32.705 "hdgst": false, 00:30:32.705 "ddgst": false 00:30:32.705 }, 00:30:32.705 "method": "bdev_nvme_attach_controller" 00:30:32.705 }' 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:32.705 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:32.706 20:30:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.706 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:32.706 ... 00:30:32.706 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:32.706 ... 
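Before fio starts, the harness runs ldd over the spdk_bdev fio plugin looking for a linked ASan runtime (libasan, then libclang_rt.asan); whichever it finds is preloaded ahead of the plugin itself, and fio is launched with the generated JSON on /dev/fd/62 and the job file on /dev/fd/61. A reduced sketch of that launch logic, with paths taken from the trace and ordinary files standing in for the /dev/fd descriptors:

#!/usr/bin/env bash
# Reduced sketch of the fio_bdev launch traced above: probe the SPDK fio plugin
# for a linked ASan runtime, preload it together with the plugin, and start fio
# with the spdk_bdev ioengine.  bdev.json and fio.job are placeholder names.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
fio_bin=/usr/src/fio/fio

asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Same probe as the trace: the third ldd column is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done

# The sanitizer runtime (when present) must be preloaded before the plugin.
LD_PRELOAD="$asan_lib $plugin" "$fio_bin" \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json fio.job

In this run neither sanitizer library is found (asan_lib stays empty), so LD_PRELOAD ends up carrying only the plugin, which is exactly what the trace shows before the four-thread job set starts.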
00:30:32.706 fio-3.35 00:30:32.706 Starting 4 threads 00:30:37.971 00:30:37.971 filename0: (groupid=0, jobs=1): err= 0: pid=117035: Sun Jul 14 20:30:26 2024 00:30:37.971 read: IOPS=2239, BW=17.5MiB/s (18.3MB/s)(87.5MiB/5003msec) 00:30:37.971 slat (nsec): min=6209, max=88540, avg=13317.37, stdev=9910.06 00:30:37.971 clat (usec): min=1140, max=4990, avg=3523.37, stdev=176.19 00:30:37.971 lat (usec): min=1147, max=5009, avg=3536.68, stdev=175.23 00:30:37.971 clat percentiles (usec): 00:30:37.971 | 1.00th=[ 3097], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3425], 00:30:37.971 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:30:37.971 | 70.00th=[ 3556], 80.00th=[ 3589], 90.00th=[ 3687], 95.00th=[ 3818], 00:30:37.971 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4686], 99.95th=[ 4752], 00:30:37.971 | 99.99th=[ 5014] 00:30:37.971 bw ( KiB/s): min=17616, max=18176, per=25.01%, avg=17920.00, stdev=194.81, samples=9 00:30:37.971 iops : min= 2202, max= 2272, avg=2240.00, stdev=24.35, samples=9 00:30:37.971 lat (msec) : 2=0.05%, 4=98.53%, 10=1.42% 00:30:37.971 cpu : usr=95.58%, sys=3.22%, ctx=6, majf=0, minf=9 00:30:37.971 IO depths : 1=2.6%, 2=5.3%, 4=69.7%, 8=22.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:37.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.971 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.971 issued rwts: total=11203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.971 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:37.971 filename0: (groupid=0, jobs=1): err= 0: pid=117036: Sun Jul 14 20:30:26 2024 00:30:37.971 read: IOPS=2237, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5002msec) 00:30:37.971 slat (nsec): min=4810, max=90604, avg=18718.78, stdev=11451.51 00:30:37.971 clat (usec): min=2055, max=5591, avg=3472.75, stdev=167.41 00:30:37.971 lat (usec): min=2066, max=5608, avg=3491.47, stdev=168.17 00:30:37.971 clat percentiles (usec): 00:30:37.971 | 1.00th=[ 3228], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3392], 00:30:37.971 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3490], 00:30:37.971 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3720], 00:30:37.971 | 99.00th=[ 3949], 99.50th=[ 4113], 99.90th=[ 5276], 99.95th=[ 5538], 00:30:37.971 | 99.99th=[ 5604] 00:30:37.971 bw ( KiB/s): min=17536, max=18288, per=24.99%, avg=17905.78, stdev=246.50, samples=9 00:30:37.971 iops : min= 2192, max= 2286, avg=2238.22, stdev=30.81, samples=9 00:30:37.971 lat (msec) : 4=99.20%, 10=0.80% 00:30:37.971 cpu : usr=94.54%, sys=4.10%, ctx=5, majf=0, minf=10 00:30:37.971 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:37.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.971 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.971 issued rwts: total=11192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.971 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:37.971 filename1: (groupid=0, jobs=1): err= 0: pid=117037: Sun Jul 14 20:30:26 2024 00:30:37.971 read: IOPS=2243, BW=17.5MiB/s (18.4MB/s)(87.7MiB/5002msec) 00:30:37.971 slat (usec): min=6, max=100, avg= 9.61, stdev= 6.66 00:30:37.971 clat (usec): min=944, max=4523, avg=3513.49, stdev=188.51 00:30:37.971 lat (usec): min=951, max=4530, avg=3523.11, stdev=188.04 00:30:37.971 clat percentiles (usec): 00:30:37.971 | 1.00th=[ 3163], 5.00th=[ 3392], 10.00th=[ 3392], 20.00th=[ 3425], 00:30:37.971 | 30.00th=[ 3458], 40.00th=[ 3458], 
50.00th=[ 3490], 60.00th=[ 3523], 00:30:37.971 | 70.00th=[ 3556], 80.00th=[ 3589], 90.00th=[ 3687], 95.00th=[ 3785], 00:30:37.971 | 99.00th=[ 3982], 99.50th=[ 4080], 99.90th=[ 4228], 99.95th=[ 4424], 00:30:37.971 | 99.99th=[ 4490] 00:30:37.971 bw ( KiB/s): min=17536, max=18304, per=25.07%, avg=17962.67, stdev=263.88, samples=9 00:30:37.972 iops : min= 2192, max= 2288, avg=2245.33, stdev=32.98, samples=9 00:30:37.972 lat (usec) : 1000=0.03% 00:30:37.972 lat (msec) : 2=0.29%, 4=98.90%, 10=0.78% 00:30:37.972 cpu : usr=94.22%, sys=4.32%, ctx=18, majf=0, minf=0 00:30:37.972 IO depths : 1=11.5%, 2=24.8%, 4=50.2%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:37.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.972 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.972 issued rwts: total=11224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:37.972 filename1: (groupid=0, jobs=1): err= 0: pid=117038: Sun Jul 14 20:30:26 2024 00:30:37.972 read: IOPS=2237, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5002msec) 00:30:37.972 slat (usec): min=6, max=187, avg=17.42, stdev=10.42 00:30:37.972 clat (usec): min=2642, max=6315, avg=3500.28, stdev=156.69 00:30:37.972 lat (usec): min=2662, max=6340, avg=3517.70, stdev=154.63 00:30:37.972 clat percentiles (usec): 00:30:37.972 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3392], 00:30:37.972 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3490], 00:30:37.972 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3654], 95.00th=[ 3785], 00:30:37.972 | 99.00th=[ 3949], 99.50th=[ 4080], 99.90th=[ 4359], 99.95th=[ 6259], 00:30:37.972 | 99.99th=[ 6325] 00:30:37.972 bw ( KiB/s): min=17664, max=18176, per=24.99%, avg=17909.67, stdev=183.65, samples=9 00:30:37.972 iops : min= 2208, max= 2272, avg=2238.67, stdev=22.98, samples=9 00:30:37.972 lat (msec) : 4=99.24%, 10=0.76% 00:30:37.972 cpu : usr=95.92%, sys=2.90%, ctx=35, majf=0, minf=9 00:30:37.972 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:37.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.972 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.972 issued rwts: total=11192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:37.972 00:30:37.972 Run status group 0 (all jobs): 00:30:37.972 READ: bw=70.0MiB/s (73.4MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.4MB/s), io=350MiB (367MB), run=5002-5003msec 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 20:30:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 ************************************ 00:30:37.972 END TEST fio_dif_rand_params 00:30:37.972 ************************************ 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 00:30:37.972 real 0m23.735s 00:30:37.972 user 2m7.819s 00:30:37.972 sys 0m3.998s 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 20:30:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:37.972 20:30:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:37.972 20:30:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 ************************************ 00:30:37.972 START TEST fio_dif_digest 00:30:37.972 ************************************ 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 bdev_null0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:37.972 [2024-07-14 20:30:26.358793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.972 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.973 { 00:30:37.973 "params": { 00:30:37.973 "name": "Nvme$subsystem", 00:30:37.973 "trtype": "$TEST_TRANSPORT", 00:30:37.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.973 "adrfam": "ipv4", 00:30:37.973 "trsvcid": "$NVMF_PORT", 00:30:37.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.973 "hdgst": ${hdgst:-false}, 00:30:37.973 "ddgst": ${ddgst:-false} 00:30:37.973 }, 00:30:37.973 "method": "bdev_nvme_attach_controller" 00:30:37.973 } 00:30:37.973 EOF 00:30:37.973 
)") 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:37.973 "params": { 00:30:37.973 "name": "Nvme0", 00:30:37.973 "trtype": "tcp", 00:30:37.973 "traddr": "10.0.0.2", 00:30:37.973 "adrfam": "ipv4", 00:30:37.973 "trsvcid": "4420", 00:30:37.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:37.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:37.973 "hdgst": true, 00:30:37.973 "ddgst": true 00:30:37.973 }, 00:30:37.973 "method": "bdev_nvme_attach_controller" 00:30:37.973 }' 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:37.973 20:30:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:37.973 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:37.973 ... 
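(Editor's note.) Compared with the fio_dif_rand_params run, the config printed above for fio_dif_digest differs only in enabling NVMe/TCP header and data digests on the single Nvme0 controller; the generated attach entry reduces to the fragment below, copied from this log. With hdgst/ddgst true the initiator computes and verifies a CRC32C digest over each TCP PDU header and data segment while driving bdev_null0 (created above with a 16-byte metadata area and DIF type 3).

    {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      }
    }

Everything else in the fio invocation (spdk_bdev ioengine, JSON over /dev/fd/62, job file over /dev/fd/61) is unchanged from the previous test.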
00:30:37.973 fio-3.35 00:30:37.973 Starting 3 threads 00:30:50.177 00:30:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=117140: Sun Jul 14 20:30:37 2024 00:30:50.177 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10005msec) 00:30:50.177 slat (nsec): min=6668, max=69034, avg=15813.40, stdev=6263.78 00:30:50.177 clat (usec): min=5586, max=19330, avg=14571.43, stdev=2013.87 00:30:50.177 lat (usec): min=5597, max=19342, avg=14587.24, stdev=2014.70 00:30:50.177 clat percentiles (usec): 00:30:50.177 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[14091], 00:30:50.177 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:30:50.177 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16319], 95.00th=[16581], 00:30:50.177 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18744], 00:30:50.177 | 99.99th=[19268] 00:30:50.177 bw ( KiB/s): min=24320, max=27904, per=29.40%, avg=26222.53, stdev=1170.81, samples=19 00:30:50.177 iops : min= 190, max= 218, avg=204.84, stdev= 9.15, samples=19 00:30:50.177 lat (msec) : 10=7.54%, 20=92.46% 00:30:50.177 cpu : usr=94.24%, sys=4.32%, ctx=19, majf=0, minf=9 00:30:50.177 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.177 issued rwts: total=2057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.177 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=117141: Sun Jul 14 20:30:37 2024 00:30:50.177 read: IOPS=249, BW=31.1MiB/s (32.7MB/s)(312MiB/10004msec) 00:30:50.177 slat (nsec): min=6280, max=80168, avg=17217.96, stdev=6775.78 00:30:50.177 clat (usec): min=8165, max=53770, avg=12021.97, stdev=6749.00 00:30:50.177 lat (usec): min=8185, max=53790, avg=12039.19, stdev=6749.03 00:30:50.177 clat percentiles (usec): 00:30:50.177 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:30:50.177 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:30:50.177 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:30:50.177 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:30:50.177 | 99.99th=[53740] 00:30:50.177 bw ( KiB/s): min=27136, max=36096, per=35.89%, avg=32013.47, stdev=2807.55, samples=19 00:30:50.177 iops : min= 212, max= 282, avg=250.11, stdev=21.93, samples=19 00:30:50.177 lat (msec) : 10=13.84%, 20=83.39%, 50=0.04%, 100=2.73% 00:30:50.177 cpu : usr=93.60%, sys=4.80%, ctx=14, majf=0, minf=0 00:30:50.177 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.177 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.177 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.177 filename0: (groupid=0, jobs=1): err= 0: pid=117142: Sun Jul 14 20:30:37 2024 00:30:50.177 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(303MiB/10004msec) 00:30:50.177 slat (nsec): min=6437, max=61512, avg=14978.64, stdev=6131.34 00:30:50.177 clat (usec): min=6348, max=17161, avg=12366.60, stdev=1991.07 00:30:50.177 lat (usec): min=6359, max=17170, avg=12381.58, stdev=1991.48 00:30:50.177 clat percentiles (usec): 00:30:50.177 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[11469], 00:30:50.177 | 
30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:30:50.177 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:30:50.177 | 99.00th=[15533], 99.50th=[15795], 99.90th=[17171], 99.95th=[17171], 00:30:50.177 | 99.99th=[17171] 00:30:50.177 bw ( KiB/s): min=27648, max=33280, per=34.59%, avg=30857.89, stdev=1611.35, samples=19 00:30:50.177 iops : min= 216, max= 260, avg=241.05, stdev=12.60, samples=19 00:30:50.177 lat (msec) : 10=13.91%, 20=86.09% 00:30:50.177 cpu : usr=93.73%, sys=4.60%, ctx=18, majf=0, minf=9 00:30:50.177 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.177 issued rwts: total=2423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.177 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.177 00:30:50.177 Run status group 0 (all jobs): 00:30:50.177 READ: bw=87.1MiB/s (91.3MB/s), 25.7MiB/s-31.1MiB/s (26.9MB/s-32.7MB/s), io=872MiB (914MB), run=10004-10005msec 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.177 00:30:50.177 real 0m10.946s 00:30:50.177 user 0m28.780s 00:30:50.177 sys 0m1.637s 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:50.177 ************************************ 00:30:50.177 END TEST fio_dif_digest 00:30:50.177 ************************************ 00:30:50.177 20:30:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:50.177 20:30:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:50.177 20:30:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:50.177 20:30:37 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:50.177 20:30:37 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:50.178 rmmod nvme_tcp 00:30:50.178 rmmod nvme_fabrics 00:30:50.178 rmmod nvme_keyring 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 116388 ']' 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 116388 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 116388 ']' 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 116388 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 116388 00:30:50.178 killing process with pid 116388 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 116388' 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@965 -- # kill 116388 00:30:50.178 20:30:37 nvmf_dif -- common/autotest_common.sh@970 -- # wait 116388 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:50.178 20:30:37 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:50.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:50.178 Waiting for block devices as requested 00:30:50.178 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.178 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.178 20:30:38 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.178 20:30:38 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.178 20:30:38 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.178 20:30:38 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.178 20:30:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.178 20:30:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:50.178 20:30:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.178 20:30:38 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:50.178 00:30:50.178 real 1m0.283s 00:30:50.178 user 3m52.886s 00:30:50.178 sys 0m13.923s 00:30:50.178 20:30:38 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:50.178 ************************************ 00:30:50.178 END TEST nvmf_dif 00:30:50.178 ************************************ 00:30:50.178 20:30:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:50.178 20:30:38 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:50.178 20:30:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:50.178 20:30:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:50.178 20:30:38 -- common/autotest_common.sh@10 -- # set +x 00:30:50.178 ************************************ 00:30:50.178 START TEST nvmf_abort_qd_sizes 00:30:50.178 ************************************ 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:50.178 * Looking for test storage... 
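(Editor's note.) The tail of the nvmf_dif suite above is the standard teardown; condensed, and with the caveat that the real nvmftestfini/killprocess helpers add error handling and namespace cleanup not repeated here, it amounts to:

    modprobe -v -r nvme-tcp nvme-fabrics   # nvme_keyring is dropped as a dependency, per the rmmod lines above
    kill 116388 && wait 116388             # stop the nvmf_tgt app for this suite (pid taken from this log)
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # rebind the test NVMe devices back to the kernel nvme driver
    ip -4 addr flush nvmf_init_if          # drop the 10.0.0.1/24 initiator address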
00:30:50.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:50.178 20:30:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:50.178 Cannot find device "nvmf_tgt_br" 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:50.178 Cannot find device "nvmf_tgt_br2" 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:50.178 Cannot find device "nvmf_tgt_br" 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:50.178 Cannot find device "nvmf_tgt_br2" 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:50.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:50.178 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:50.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:50.179 20:30:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:50.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:30:50.179 00:30:50.179 --- 10.0.0.2 ping statistics --- 00:30:50.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.179 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:50.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:50.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:30:50.179 00:30:50.179 --- 10.0.0.3 ping statistics --- 00:30:50.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.179 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:50.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:30:50.179 00:30:50.179 --- 10.0.0.1 ping statistics --- 00:30:50.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.179 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:50.179 20:30:38 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:50.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:50.695 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:50.695 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:50.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=117738 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 117738 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 117738 ']' 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:50.695 20:30:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:50.695 [2024-07-14 20:30:39.765012] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
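(Editor's note.) The three pings above confirm the virtual topology that nvmf_veth_init just assembled: the initiator end (nvmf_init_if, 10.0.0.1/24) stays in the root namespace, the target interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge with an iptables accept rule for TCP/4420. A condensed sketch of that setup and of the target launch that the waitforlisten trace shows next (same commands as in this log; per-interface link-up commands and error handling elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the target then runs inside the namespace and listens on 10.0.0.2:4420:
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf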
00:30:50.695 [2024-07-14 20:30:39.765145] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.953 [2024-07-14 20:30:39.908315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.953 [2024-07-14 20:30:40.002413] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.953 [2024-07-14 20:30:40.002842] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.953 [2024-07-14 20:30:40.003127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.953 [2024-07-14 20:30:40.003359] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.953 [2024-07-14 20:30:40.003509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.953 [2024-07-14 20:30:40.003757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.953 [2024-07-14 20:30:40.003850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.953 [2024-07-14 20:30:40.003960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.953 [2024-07-14 20:30:40.003965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:30:51.212 20:30:40 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:51.212 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:51.213 20:30:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:51.213 ************************************ 00:30:51.213 START TEST spdk_target_abort 00:30:51.213 ************************************ 00:30:51.213 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:30:51.213 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:51.213 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:51.213 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.213 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:51.472 spdk_targetn1 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:51.472 [2024-07-14 20:30:40.334347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.472 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:51.473 [2024-07-14 20:30:40.362544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:51.473 20:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:54.784 Initializing NVMe Controllers 00:30:54.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:54.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:54.784 Initialization complete. Launching workers. 
00:30:54.784 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10553, failed: 0 00:30:54.784 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1132, failed to submit 9421 00:30:54.784 success 766, unsuccess 366, failed 0 00:30:54.784 20:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:54.784 20:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:58.110 Initializing NVMe Controllers 00:30:58.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:58.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:58.110 Initialization complete. Launching workers. 00:30:58.110 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5890, failed: 0 00:30:58.110 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 4677 00:30:58.110 success 245, unsuccess 968, failed 0 00:30:58.110 20:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:58.110 20:30:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:01.393 Initializing NVMe Controllers 00:31:01.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:01.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:01.393 Initialization complete. Launching workers. 
00:31:01.393 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29371, failed: 0 00:31:01.393 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2605, failed to submit 26766 00:31:01.393 success 353, unsuccess 2252, failed 0 00:31:01.393 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:01.393 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.393 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.393 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.393 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:01.393 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.393 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 117738 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 117738 ']' 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 117738 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 117738 00:31:01.651 killing process with pid 117738 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 117738' 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 117738 00:31:01.651 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 117738 00:31:01.910 ************************************ 00:31:01.910 END TEST spdk_target_abort 00:31:01.910 ************************************ 00:31:01.910 00:31:01.910 real 0m10.563s 00:31:01.910 user 0m40.441s 00:31:01.910 sys 0m1.789s 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.910 20:30:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:01.910 20:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:01.910 20:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:01.910 20:30:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:01.910 ************************************ 00:31:01.910 START TEST kernel_target_abort 00:31:01.910 
************************************ 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:01.910 20:30:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:02.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:02.167 Waiting for block devices as requested 00:31:02.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:02.425 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:02.425 No valid GPT data, bailing 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:02.425 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:02.684 No valid GPT data, bailing 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
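Each /sys/block/nvme* candidate above goes through the same eligibility test before it can back the kernel target namespace: zoned namespaces are skipped, and a device only counts as free when blkid reports no partition-table type on it. A minimal stand-alone sketch of that selection loop, assuming plain bash with blkid available (the spdk-gpt.py cross-check used by the real block_in_use helper is omitted here):

#!/usr/bin/env bash
# Pick the first non-zoned NVMe block device that carries no partition table.
pick_free_nvme() {
    local sys dev pt
    for sys in /sys/block/nvme*; do
        [[ -e $sys ]] || continue
        dev=${sys##*/}
        # Skip zoned (ZNS) namespaces; the test only wants plain block devices.
        if [[ -e $sys/queue/zoned && $(<"$sys/queue/zoned") != none ]]; then
            continue
        fi
        # Empty PTTYPE output means blkid found no partition table on the device.
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
        if [[ -z $pt ]]; then
            echo "/dev/$dev"
            return 0
        fi
    done
    return 1
}

nvme=$(pick_free_nvme) || { echo "no unused NVMe device found" >&2; exit 1; }
echo "exporting $nvme through the kernel target"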
00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:02.684 No valid GPT data, bailing 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:02.684 No valid GPT data, bailing 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 --hostid=caa3dfc1-79db-49e7-95fe-b9f6785698c4 -a 10.0.0.1 -t tcp -s 4420 00:31:02.684 00:31:02.684 Discovery Log Number of Records 2, Generation counter 2 00:31:02.684 =====Discovery Log Entry 0====== 00:31:02.684 trtype: tcp 00:31:02.684 adrfam: ipv4 00:31:02.684 subtype: current discovery subsystem 00:31:02.684 treq: not specified, sq flow control disable supported 00:31:02.684 portid: 1 00:31:02.684 trsvcid: 4420 00:31:02.684 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:02.684 traddr: 10.0.0.1 00:31:02.684 eflags: none 00:31:02.684 sectype: none 00:31:02.684 =====Discovery Log Entry 1====== 00:31:02.684 trtype: tcp 00:31:02.684 adrfam: ipv4 00:31:02.684 subtype: nvme subsystem 00:31:02.684 treq: not specified, sq flow control disable supported 00:31:02.684 portid: 1 00:31:02.684 trsvcid: 4420 00:31:02.684 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:02.684 traddr: 10.0.0.1 00:31:02.684 eflags: none 00:31:02.684 sectype: none 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:02.684 20:30:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:02.684 20:30:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:05.968 Initializing NVMe Controllers 00:31:05.968 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:05.968 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:05.968 Initialization complete. Launching workers. 00:31:05.968 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33720, failed: 0 00:31:05.968 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33720, failed to submit 0 00:31:05.968 success 0, unsuccess 33720, failed 0 00:31:05.968 20:30:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:05.968 20:30:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:09.268 Initializing NVMe Controllers 00:31:09.268 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:09.268 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:09.268 Initialization complete. Launching workers. 
00:31:09.268 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65287, failed: 0 00:31:09.268 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26688, failed to submit 38599 00:31:09.268 success 0, unsuccess 26688, failed 0 00:31:09.269 20:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:09.269 20:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:12.553 Initializing NVMe Controllers 00:31:12.553 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:12.553 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:12.553 Initialization complete. Launching workers. 00:31:12.553 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75722, failed: 0 00:31:12.553 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18900, failed to submit 56822 00:31:12.553 success 0, unsuccess 18900, failed 0 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:12.553 20:31:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:13.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:14.056 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:14.056 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:14.056 00:31:14.056 real 0m12.035s 00:31:14.056 user 0m5.519s 00:31:14.056 sys 0m3.852s 00:31:14.056 20:31:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:14.056 ************************************ 00:31:14.056 END TEST kernel_target_abort 00:31:14.056 ************************************ 00:31:14.056 20:31:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.056 20:31:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:14.056 20:31:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:14.056 
20:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:14.056 20:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.056 rmmod nvme_tcp 00:31:14.056 rmmod nvme_fabrics 00:31:14.056 rmmod nvme_keyring 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 117738 ']' 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 117738 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 117738 ']' 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 117738 00:31:14.056 Process with pid 117738 is not found 00:31:14.056 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (117738) - No such process 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 117738 is not found' 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:14.056 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:14.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:14.624 Waiting for block devices as requested 00:31:14.624 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:14.624 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.624 20:31:03 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:14.624 ************************************ 00:31:14.624 END TEST nvmf_abort_qd_sizes 00:31:14.624 ************************************ 00:31:14.624 00:31:14.624 real 0m25.324s 00:31:14.624 user 0m46.954s 00:31:14.624 sys 0m6.979s 00:31:14.625 20:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:14.625 20:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:14.882 20:31:03 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:14.882 20:31:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:14.882 20:31:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:14.882 20:31:03 -- common/autotest_common.sh@10 -- # set +x 
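Before the keyring tests start, it is worth condensing what the kernel_target_abort run above actually did: export a spare local NVMe device through the Linux nvmet configfs tree on 10.0.0.1:4420, then replay the mixed read/write abort workload against it at queue depths 4, 24 and 64. The following is a rough sketch of that sequence rather than the test script itself; it assumes root, that the nvmet and nvmet_tcp modules are loaded, that the device and binary paths are the ones seen in the log, and that the configfs attribute names follow the standard kernel nvmet layout:

#!/usr/bin/env bash
set -e
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
nvme=/dev/nvme1n1                                         # spare device picked by the block scan
abort=/home/vagrant/spdk_repo/spdk/build/examples/abort   # SPDK abort example

# Export $nvme as namespace 1 of a kernel NVMe-oF/TCP subsystem.
mkdir -p "$sub/namespaces/1" "$port"
echo 1        > "$sub/attr_allow_any_host"
echo "$nvme"  > "$sub/namespaces/1/device_path"
echo 1        > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/$nqn"

# Hammer the target with aborts at each queue depth, as in the runs above.
for qd in 4 24 64; do
    "$abort" -q "$qd" -w rw -M 50 -o 4096 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
done

Teardown is the mirror image performed by clean_kernel_target above: remove the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.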
00:31:14.882 ************************************ 00:31:14.882 START TEST keyring_file 00:31:14.882 ************************************ 00:31:14.882 20:31:03 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:14.882 * Looking for test storage... 00:31:14.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:14.883 20:31:03 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.883 20:31:03 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.883 20:31:03 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.883 20:31:03 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.883 20:31:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.883 20:31:03 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.883 20:31:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:14.883 20:31:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v8oaPabaam 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:14.883 20:31:03 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v8oaPabaam 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v8oaPabaam 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.v8oaPabaam 00:31:14.883 20:31:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0QoOV3W3Yq 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:14.883 20:31:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0QoOV3W3Yq 00:31:14.883 20:31:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0QoOV3W3Yq 00:31:15.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.141 20:31:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0QoOV3W3Yq 00:31:15.141 20:31:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=118599 00:31:15.141 20:31:03 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:15.141 20:31:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 118599 00:31:15.141 20:31:03 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 118599 ']' 00:31:15.141 20:31:03 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.141 20:31:03 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:15.141 20:31:03 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.141 20:31:03 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:15.141 20:31:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:15.141 [2024-07-14 20:31:04.036163] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
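The prep_key helper above is what turns the raw hex key 00112233445566778899aabbccddeeff into the file-backed PSK that gets handed to the keyring later in this run. A small sketch of that conversion, assuming the usual NVMe TLS PSK interchange layout (the NVMeTLSkey-1 prefix, a two-digit hash indicator, then base64 of the key bytes followed by a little-endian CRC32); the exact field encoding is an assumption here, not lifted from the test:

#!/usr/bin/env bash
key_hex=00112233445566778899aabbccddeeff      # key0 from the log
path=$(mktemp)                                # e.g. /tmp/tmp.v8oaPabaam

# base64(key bytes || CRC32(key bytes), little-endian), wrapped in the
# interchange prefix; "00" mirrors digest=0 in the log (assumed mapping).
psk=$(python3 -c '
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print("NVMeTLSkey-1:00:%s:" % blob)
' "$key_hex")

printf "%s\n" "$psk" > "$path"
chmod 0600 "$path"    # keyring_file_add_key rejects more permissive modes later in the run
echo "PSK written to $path"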
00:31:15.141 [2024-07-14 20:31:04.036501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118599 ] 00:31:15.141 [2024-07-14 20:31:04.178457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.399 [2024-07-14 20:31:04.276304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.965 20:31:05 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:15.965 20:31:05 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:15.965 20:31:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:15.965 20:31:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.965 20:31:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:15.965 [2024-07-14 20:31:05.034657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.224 null0 00:31:16.224 [2024-07-14 20:31:05.066616] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:16.224 [2024-07-14 20:31:05.066905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:16.224 [2024-07-14 20:31:05.074613] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.224 20:31:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:16.224 [2024-07-14 20:31:05.086607] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:16.224 2024/07/14 20:31:05 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:31:16.224 request: 00:31:16.224 { 00:31:16.224 "method": "nvmf_subsystem_add_listener", 00:31:16.224 "params": { 00:31:16.224 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.224 "secure_channel": false, 00:31:16.224 "listen_address": { 00:31:16.224 "trtype": "tcp", 00:31:16.224 "traddr": "127.0.0.1", 00:31:16.224 "trsvcid": "4420" 00:31:16.224 } 00:31:16.224 } 00:31:16.224 } 00:31:16.224 Got JSON-RPC error response 00:31:16.224 
GoRPCClient: error on JSON-RPC call 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:16.224 20:31:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=118634 00:31:16.224 20:31:05 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:16.224 20:31:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 118634 /var/tmp/bperf.sock 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 118634 ']' 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:16.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:16.224 20:31:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:16.224 [2024-07-14 20:31:05.144345] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:16.224 [2024-07-14 20:31:05.144642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118634 ] 00:31:16.224 [2024-07-14 20:31:05.281814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.482 [2024-07-14 20:31:05.365147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.056 20:31:06 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:17.056 20:31:06 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:17.056 20:31:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:17.056 20:31:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:17.314 20:31:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0QoOV3W3Yq 00:31:17.314 20:31:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0QoOV3W3Yq 00:31:17.572 20:31:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:17.572 20:31:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:17.572 20:31:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:17.572 20:31:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:17.572 20:31:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:17.831 20:31:06 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.v8oaPabaam == \/\t\m\p\/\t\m\p\.\v\8\o\a\P\a\b\a\a\m ]] 00:31:17.831 
20:31:06 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:17.831 20:31:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:17.831 20:31:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:17.831 20:31:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:17.831 20:31:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:18.089 20:31:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0QoOV3W3Yq == \/\t\m\p\/\t\m\p\.\0\Q\o\O\V\3\W\3\Y\q ]] 00:31:18.089 20:31:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:18.089 20:31:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:18.089 20:31:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:18.089 20:31:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.089 20:31:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:18.089 20:31:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:18.348 20:31:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:18.348 20:31:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:18.348 20:31:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:18.348 20:31:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:18.348 20:31:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.348 20:31:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:18.348 20:31:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:18.606 20:31:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:18.606 20:31:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:18.606 20:31:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:18.864 [2024-07-14 20:31:07.845693] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:18.864 nvme0n1 00:31:18.864 20:31:07 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:18.864 20:31:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:18.864 20:31:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:18.864 20:31:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.864 20:31:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:18.864 20:31:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.432 20:31:08 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:19.432 20:31:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:19.432 20:31:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:19.432 20:31:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.432 20:31:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.432 20:31:08 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.432 20:31:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:19.432 20:31:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:19.432 20:31:08 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:19.690 Running I/O for 1 seconds... 00:31:20.627 00:31:20.627 Latency(us) 00:31:20.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.627 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:20.627 nvme0n1 : 1.01 12912.30 50.44 0.00 0.00 9886.58 4379.00 16562.73 00:31:20.627 =================================================================================================================== 00:31:20.627 Total : 12912.30 50.44 0.00 0.00 9886.58 4379.00 16562.73 00:31:20.627 0 00:31:20.627 20:31:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:20.627 20:31:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:20.886 20:31:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:20.886 20:31:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:20.886 20:31:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:20.886 20:31:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:20.886 20:31:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:20.886 20:31:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:21.144 20:31:10 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:21.144 20:31:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:21.144 20:31:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.145 20:31:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:21.145 20:31:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:21.145 20:31:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.145 20:31:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.404 20:31:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:21.404 20:31:10 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.404 20:31:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:21.404 20:31:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.404 20:31:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:21.404 20:31:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:21.404 20:31:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:21.404 20:31:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:21.404 20:31:10 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.404 20:31:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.663 [2024-07-14 20:31:10.591450] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:21.663 [2024-07-14 20:31:10.591487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a9f10 (107): Transport endpoint is not connected 00:31:21.663 [2024-07-14 20:31:10.592475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a9f10 (9): Bad file descriptor 00:31:21.663 [2024-07-14 20:31:10.593472] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:21.663 [2024-07-14 20:31:10.593488] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:21.663 [2024-07-14 20:31:10.593497] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:21.663 2024/07/14 20:31:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:21.663 request: 00:31:21.663 { 00:31:21.663 "method": "bdev_nvme_attach_controller", 00:31:21.663 "params": { 00:31:21.663 "name": "nvme0", 00:31:21.663 "trtype": "tcp", 00:31:21.663 "traddr": "127.0.0.1", 00:31:21.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.663 "adrfam": "ipv4", 00:31:21.663 "trsvcid": "4420", 00:31:21.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.663 "psk": "key1" 00:31:21.663 } 00:31:21.663 } 00:31:21.663 Got JSON-RPC error response 00:31:21.663 GoRPCClient: error on JSON-RPC call 00:31:21.663 20:31:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:21.663 20:31:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:21.663 20:31:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:21.663 20:31:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:21.663 20:31:10 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:21.663 20:31:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:21.663 20:31:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.663 20:31:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.663 20:31:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.663 20:31:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:21.923 20:31:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:21.923 20:31:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:21.923 20:31:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.923 20:31:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:21.923 20:31:10 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.923 20:31:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:21.923 20:31:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.185 20:31:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:22.185 20:31:11 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:22.185 20:31:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:22.444 20:31:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:22.444 20:31:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:22.703 20:31:11 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:22.703 20:31:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:22.703 20:31:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.962 20:31:11 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:22.962 20:31:11 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.v8oaPabaam 00:31:22.962 20:31:11 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:22.962 20:31:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:22.962 20:31:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:22.962 20:31:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:22.962 20:31:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.962 20:31:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:22.962 20:31:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.962 20:31:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:22.962 20:31:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:23.221 [2024-07-14 20:31:12.213812] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.v8oaPabaam': 0100660 00:31:23.221 [2024-07-14 20:31:12.213905] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:23.221 2024/07/14 20:31:12 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.v8oaPabaam], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:23.221 request: 00:31:23.221 { 00:31:23.221 "method": "keyring_file_add_key", 00:31:23.221 "params": { 00:31:23.221 "name": "key0", 00:31:23.221 "path": "/tmp/tmp.v8oaPabaam" 00:31:23.221 } 00:31:23.221 } 00:31:23.221 Got JSON-RPC error response 00:31:23.221 GoRPCClient: error on JSON-RPC call 00:31:23.221 20:31:12 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:23.221 20:31:12 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:23.221 20:31:12 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:23.221 20:31:12 keyring_file -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:31:23.221 20:31:12 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.v8oaPabaam 00:31:23.221 20:31:12 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:23.221 20:31:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v8oaPabaam 00:31:23.479 20:31:12 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.v8oaPabaam 00:31:23.479 20:31:12 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:23.479 20:31:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:23.479 20:31:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:23.479 20:31:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:23.479 20:31:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:23.479 20:31:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:23.738 20:31:12 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:23.738 20:31:12 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.738 20:31:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:23.738 20:31:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.738 20:31:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:23.738 20:31:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:23.738 20:31:12 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:23.738 20:31:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:23.738 20:31:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.738 20:31:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:24.001 [2024-07-14 20:31:13.001997] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.v8oaPabaam': No such file or directory 00:31:24.001 [2024-07-14 20:31:13.002041] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:24.001 [2024-07-14 20:31:13.002067] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:24.001 [2024-07-14 20:31:13.002076] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:24.001 [2024-07-14 20:31:13.002085] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:24.001 2024/07/14 20:31:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:31:24.001 request: 00:31:24.001 { 00:31:24.001 "method": "bdev_nvme_attach_controller", 00:31:24.001 "params": { 00:31:24.001 "name": "nvme0", 00:31:24.001 "trtype": "tcp", 00:31:24.001 "traddr": "127.0.0.1", 00:31:24.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.001 "adrfam": "ipv4", 00:31:24.001 "trsvcid": "4420", 00:31:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.001 "psk": "key0" 00:31:24.001 } 00:31:24.001 } 00:31:24.001 Got JSON-RPC error response 00:31:24.001 GoRPCClient: error on JSON-RPC call 00:31:24.001 20:31:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:24.001 20:31:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:24.001 20:31:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:24.001 20:31:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:24.001 20:31:13 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:24.001 20:31:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:24.266 20:31:13 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4mOd9aqnhr 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:24.266 20:31:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:24.266 20:31:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:24.266 20:31:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:24.266 20:31:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:24.266 20:31:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:24.266 20:31:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4mOd9aqnhr 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4mOd9aqnhr 00:31:24.266 20:31:13 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4mOd9aqnhr 00:31:24.266 20:31:13 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4mOd9aqnhr 00:31:24.266 20:31:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4mOd9aqnhr 00:31:24.526 20:31:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:24.526 20:31:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:24.783 nvme0n1 00:31:24.783 
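The trace above is dense, so here is a condensed sketch of the happy-path sequence it just exercised: create a key file with 0600 permissions (0660 was rejected earlier with "Operation not permitted"), register it with the bperf instance, then attach a TCP controller that references the key by name. The socket path, NQNs and RPC names are taken verbatim from this run; the placeholder key contents and the $RPC shorthand are illustrative only, since the test itself generates the PSK with the format_interchange_psk helper seen above.

    KEYFILE=$(mktemp)                              # e.g. /tmp/tmp.4mOd9aqnhr in this run
    echo "NVMeTLSkey-1:00:<base64 PSK material>:" > "$KEYFILE"   # placeholder contents
    chmod 0600 "$KEYFILE"                          # anything looser (e.g. 0660) is refused

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC keyring_file_add_key key0 "$KEYFILE"
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $RPC keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # 2 while the controller holds it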
20:31:13 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:24.783 20:31:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:24.783 20:31:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:24.783 20:31:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.783 20:31:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.783 20:31:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:25.041 20:31:14 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:25.041 20:31:14 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:25.041 20:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:25.299 20:31:14 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:25.299 20:31:14 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:25.299 20:31:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.300 20:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.300 20:31:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:25.558 20:31:14 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:25.558 20:31:14 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:25.558 20:31:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:25.558 20:31:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:25.558 20:31:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.558 20:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.558 20:31:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:25.817 20:31:14 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:25.817 20:31:14 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:25.817 20:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:26.077 20:31:14 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:26.077 20:31:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:26.077 20:31:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.336 20:31:15 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:26.336 20:31:15 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4mOd9aqnhr 00:31:26.336 20:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4mOd9aqnhr 00:31:26.595 20:31:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0QoOV3W3Yq 00:31:26.595 20:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0QoOV3W3Yq 00:31:26.595 20:31:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.595 20:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:26.853 nvme0n1 00:31:27.113 20:31:15 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:27.113 20:31:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:27.372 20:31:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:27.372 "subsystems": [ 00:31:27.372 { 00:31:27.372 "subsystem": "keyring", 00:31:27.372 "config": [ 00:31:27.372 { 00:31:27.372 "method": "keyring_file_add_key", 00:31:27.372 "params": { 00:31:27.372 "name": "key0", 00:31:27.372 "path": "/tmp/tmp.4mOd9aqnhr" 00:31:27.372 } 00:31:27.372 }, 00:31:27.372 { 00:31:27.372 "method": "keyring_file_add_key", 00:31:27.372 "params": { 00:31:27.372 "name": "key1", 00:31:27.372 "path": "/tmp/tmp.0QoOV3W3Yq" 00:31:27.372 } 00:31:27.372 } 00:31:27.372 ] 00:31:27.372 }, 00:31:27.372 { 00:31:27.372 "subsystem": "iobuf", 00:31:27.372 "config": [ 00:31:27.372 { 00:31:27.372 "method": "iobuf_set_options", 00:31:27.372 "params": { 00:31:27.372 "large_bufsize": 135168, 00:31:27.372 "large_pool_count": 1024, 00:31:27.372 "small_bufsize": 8192, 00:31:27.372 "small_pool_count": 8192 00:31:27.372 } 00:31:27.372 } 00:31:27.372 ] 00:31:27.372 }, 00:31:27.372 { 00:31:27.372 "subsystem": "sock", 00:31:27.372 "config": [ 00:31:27.372 { 00:31:27.372 "method": "sock_set_default_impl", 00:31:27.372 "params": { 00:31:27.372 "impl_name": "posix" 00:31:27.372 } 00:31:27.372 }, 00:31:27.372 { 00:31:27.372 "method": "sock_impl_set_options", 00:31:27.372 "params": { 00:31:27.372 "enable_ktls": false, 00:31:27.372 "enable_placement_id": 0, 00:31:27.372 "enable_quickack": false, 00:31:27.372 "enable_recv_pipe": true, 00:31:27.372 "enable_zerocopy_send_client": false, 00:31:27.372 "enable_zerocopy_send_server": true, 00:31:27.372 "impl_name": "ssl", 00:31:27.372 "recv_buf_size": 4096, 00:31:27.372 "send_buf_size": 4096, 00:31:27.372 "tls_version": 0, 00:31:27.372 "zerocopy_threshold": 0 00:31:27.372 } 00:31:27.372 }, 00:31:27.372 { 00:31:27.372 "method": "sock_impl_set_options", 00:31:27.372 "params": { 00:31:27.372 "enable_ktls": false, 00:31:27.372 "enable_placement_id": 0, 00:31:27.372 "enable_quickack": false, 00:31:27.372 "enable_recv_pipe": true, 00:31:27.372 "enable_zerocopy_send_client": false, 00:31:27.372 "enable_zerocopy_send_server": true, 00:31:27.372 "impl_name": "posix", 00:31:27.372 "recv_buf_size": 2097152, 00:31:27.372 "send_buf_size": 2097152, 00:31:27.372 "tls_version": 0, 00:31:27.372 "zerocopy_threshold": 0 00:31:27.372 } 00:31:27.372 } 00:31:27.372 ] 00:31:27.372 }, 00:31:27.372 { 00:31:27.372 "subsystem": "vmd", 00:31:27.372 "config": [] 00:31:27.372 }, 00:31:27.373 { 00:31:27.373 "subsystem": "accel", 00:31:27.373 "config": [ 00:31:27.373 { 00:31:27.373 "method": "accel_set_options", 00:31:27.373 "params": { 00:31:27.373 "buf_count": 2048, 00:31:27.373 "large_cache_size": 16, 00:31:27.373 "sequence_count": 2048, 00:31:27.373 "small_cache_size": 128, 00:31:27.373 "task_count": 2048 00:31:27.373 } 00:31:27.373 } 00:31:27.373 ] 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "subsystem": "bdev", 00:31:27.373 "config": [ 00:31:27.373 { 00:31:27.373 "method": "bdev_set_options", 00:31:27.373 "params": { 00:31:27.373 "bdev_auto_examine": 
true, 00:31:27.373 "bdev_io_cache_size": 256, 00:31:27.373 "bdev_io_pool_size": 65535, 00:31:27.373 "iobuf_large_cache_size": 16, 00:31:27.373 "iobuf_small_cache_size": 128 00:31:27.373 } 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "method": "bdev_raid_set_options", 00:31:27.373 "params": { 00:31:27.373 "process_window_size_kb": 1024 00:31:27.373 } 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "method": "bdev_iscsi_set_options", 00:31:27.373 "params": { 00:31:27.373 "timeout_sec": 30 00:31:27.373 } 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "method": "bdev_nvme_set_options", 00:31:27.373 "params": { 00:31:27.373 "action_on_timeout": "none", 00:31:27.373 "allow_accel_sequence": false, 00:31:27.373 "arbitration_burst": 0, 00:31:27.373 "bdev_retry_count": 3, 00:31:27.373 "ctrlr_loss_timeout_sec": 0, 00:31:27.373 "delay_cmd_submit": true, 00:31:27.373 "dhchap_dhgroups": [ 00:31:27.373 "null", 00:31:27.373 "ffdhe2048", 00:31:27.373 "ffdhe3072", 00:31:27.373 "ffdhe4096", 00:31:27.373 "ffdhe6144", 00:31:27.373 "ffdhe8192" 00:31:27.373 ], 00:31:27.373 "dhchap_digests": [ 00:31:27.373 "sha256", 00:31:27.373 "sha384", 00:31:27.373 "sha512" 00:31:27.373 ], 00:31:27.373 "disable_auto_failback": false, 00:31:27.373 "fast_io_fail_timeout_sec": 0, 00:31:27.373 "generate_uuids": false, 00:31:27.373 "high_priority_weight": 0, 00:31:27.373 "io_path_stat": false, 00:31:27.373 "io_queue_requests": 512, 00:31:27.373 "keep_alive_timeout_ms": 10000, 00:31:27.373 "low_priority_weight": 0, 00:31:27.373 "medium_priority_weight": 0, 00:31:27.373 "nvme_adminq_poll_period_us": 10000, 00:31:27.373 "nvme_error_stat": false, 00:31:27.373 "nvme_ioq_poll_period_us": 0, 00:31:27.373 "rdma_cm_event_timeout_ms": 0, 00:31:27.373 "rdma_max_cq_size": 0, 00:31:27.373 "rdma_srq_size": 0, 00:31:27.373 "reconnect_delay_sec": 0, 00:31:27.373 "timeout_admin_us": 0, 00:31:27.373 "timeout_us": 0, 00:31:27.373 "transport_ack_timeout": 0, 00:31:27.373 "transport_retry_count": 4, 00:31:27.373 "transport_tos": 0 00:31:27.373 } 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "method": "bdev_nvme_attach_controller", 00:31:27.373 "params": { 00:31:27.373 "adrfam": "IPv4", 00:31:27.373 "ctrlr_loss_timeout_sec": 0, 00:31:27.373 "ddgst": false, 00:31:27.373 "fast_io_fail_timeout_sec": 0, 00:31:27.373 "hdgst": false, 00:31:27.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.373 "name": "nvme0", 00:31:27.373 "prchk_guard": false, 00:31:27.373 "prchk_reftag": false, 00:31:27.373 "psk": "key0", 00:31:27.373 "reconnect_delay_sec": 0, 00:31:27.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.373 "traddr": "127.0.0.1", 00:31:27.373 "trsvcid": "4420", 00:31:27.373 "trtype": "TCP" 00:31:27.373 } 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "method": "bdev_nvme_set_hotplug", 00:31:27.373 "params": { 00:31:27.373 "enable": false, 00:31:27.373 "period_us": 100000 00:31:27.373 } 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "method": "bdev_wait_for_examine" 00:31:27.373 } 00:31:27.373 ] 00:31:27.373 }, 00:31:27.373 { 00:31:27.373 "subsystem": "nbd", 00:31:27.373 "config": [] 00:31:27.373 } 00:31:27.373 ] 00:31:27.373 }' 00:31:27.373 20:31:16 keyring_file -- keyring/file.sh@114 -- # killprocess 118634 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 118634 ']' 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@950 -- # kill -0 118634 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:27.373 
20:31:16 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118634 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:27.373 killing process with pid 118634 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118634' 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@965 -- # kill 118634 00:31:27.373 Received shutdown signal, test time was about 1.000000 seconds 00:31:27.373 00:31:27.373 Latency(us) 00:31:27.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.373 =================================================================================================================== 00:31:27.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:27.373 20:31:16 keyring_file -- common/autotest_common.sh@970 -- # wait 118634 00:31:27.632 20:31:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=119095 00:31:27.632 20:31:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 119095 /var/tmp/bperf.sock 00:31:27.632 20:31:16 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 119095 ']' 00:31:27.632 20:31:16 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:27.632 20:31:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:27.632 "subsystems": [ 00:31:27.632 { 00:31:27.632 "subsystem": "keyring", 00:31:27.632 "config": [ 00:31:27.632 { 00:31:27.632 "method": "keyring_file_add_key", 00:31:27.632 "params": { 00:31:27.632 "name": "key0", 00:31:27.632 "path": "/tmp/tmp.4mOd9aqnhr" 00:31:27.632 } 00:31:27.632 }, 00:31:27.632 { 00:31:27.632 "method": "keyring_file_add_key", 00:31:27.632 "params": { 00:31:27.632 "name": "key1", 00:31:27.632 "path": "/tmp/tmp.0QoOV3W3Yq" 00:31:27.632 } 00:31:27.632 } 00:31:27.632 ] 00:31:27.632 }, 00:31:27.632 { 00:31:27.632 "subsystem": "iobuf", 00:31:27.632 "config": [ 00:31:27.632 { 00:31:27.632 "method": "iobuf_set_options", 00:31:27.632 "params": { 00:31:27.632 "large_bufsize": 135168, 00:31:27.632 "large_pool_count": 1024, 00:31:27.632 "small_bufsize": 8192, 00:31:27.632 "small_pool_count": 8192 00:31:27.632 } 00:31:27.632 } 00:31:27.633 ] 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "subsystem": "sock", 00:31:27.633 "config": [ 00:31:27.633 { 00:31:27.633 "method": "sock_set_default_impl", 00:31:27.633 "params": { 00:31:27.633 "impl_name": "posix" 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "sock_impl_set_options", 00:31:27.633 "params": { 00:31:27.633 "enable_ktls": false, 00:31:27.633 "enable_placement_id": 0, 00:31:27.633 "enable_quickack": false, 00:31:27.633 "enable_recv_pipe": true, 00:31:27.633 "enable_zerocopy_send_client": false, 00:31:27.633 "enable_zerocopy_send_server": true, 00:31:27.633 "impl_name": "ssl", 00:31:27.633 "recv_buf_size": 4096, 00:31:27.633 "send_buf_size": 4096, 00:31:27.633 "tls_version": 0, 00:31:27.633 "zerocopy_threshold": 0 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "sock_impl_set_options", 00:31:27.633 "params": { 00:31:27.633 "enable_ktls": false, 00:31:27.633 "enable_placement_id": 0, 00:31:27.633 "enable_quickack": false, 00:31:27.633 "enable_recv_pipe": true, 00:31:27.633 "enable_zerocopy_send_client": false, 00:31:27.633 "enable_zerocopy_send_server": true, 00:31:27.633 
"impl_name": "posix", 00:31:27.633 "recv_buf_size": 2097152, 00:31:27.633 "send_buf_size": 2097152, 00:31:27.633 "tls_version": 0, 00:31:27.633 "zerocopy_threshold": 0 00:31:27.633 } 00:31:27.633 } 00:31:27.633 ] 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "subsystem": "vmd", 00:31:27.633 "config": [] 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "subsystem": "accel", 00:31:27.633 "config": [ 00:31:27.633 { 00:31:27.633 "method": "accel_set_options", 00:31:27.633 "params": { 00:31:27.633 "buf_count": 2048, 00:31:27.633 "large_cache_size": 16, 00:31:27.633 "sequence_count": 2048, 00:31:27.633 "small_cache_size": 128, 00:31:27.633 "task_count": 2048 00:31:27.633 } 00:31:27.633 } 00:31:27.633 ] 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "subsystem": "bdev", 00:31:27.633 "config": [ 00:31:27.633 { 00:31:27.633 "method": "bdev_set_options", 00:31:27.633 "params": { 00:31:27.633 "bdev_auto_examine": true, 00:31:27.633 "bdev_io_cache_size": 256, 00:31:27.633 "bdev_io_pool_size": 65535, 00:31:27.633 "iobuf_large_cache_size": 16, 00:31:27.633 "iobuf_small_cache_size": 128 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "bdev_raid_set_options", 00:31:27.633 "params": { 00:31:27.633 "process_window_size_kb": 1024 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "bdev_iscsi_set_options", 00:31:27.633 "params": { 00:31:27.633 "timeout_sec": 30 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "bdev_nvme_set_options", 00:31:27.633 "params": { 00:31:27.633 "action_on_timeout": "none", 00:31:27.633 "allow_accel_sequence": false, 00:31:27.633 "arbitration_burst": 0, 00:31:27.633 "bdev_retry_count": 3, 00:31:27.633 "ctrlr_loss_timeout_sec": 0, 00:31:27.633 "delay_cmd_submit": true, 00:31:27.633 "dhchap_dhgroups": [ 00:31:27.633 "null", 00:31:27.633 "ffdhe2048", 00:31:27.633 "ffdhe3072", 00:31:27.633 "ffdhe4096", 00:31:27.633 "ffdhe6144", 00:31:27.633 "ffdhe8192" 00:31:27.633 ], 00:31:27.633 "dhchap_digests": [ 00:31:27.633 "sha256", 00:31:27.633 "sha384", 00:31:27.633 "sha512" 00:31:27.633 ], 00:31:27.633 "disable_auto_failback": false, 00:31:27.633 "fast_io_fail_timeout_sec": 0, 00:31:27.633 "generate_uuids": false, 00:31:27.633 "high_priority_weight": 0, 00:31:27.633 "io_path_stat": false, 00:31:27.633 "io_queue_requests": 512, 00:31:27.633 "keep_alive_timeout_ms": 10000, 00:31:27.633 "low_priority_weight": 0, 00:31:27.633 "medium_priority_weight": 0, 00:31:27.633 "nvme_adminq_poll_period_us": 10000, 00:31:27.633 "nvme_error_stat": false, 00:31:27.633 "nvme_ioq_poll_period_us": 0, 00:31:27.633 "rdma_cm_event_timeout_ms": 0, 00:31:27.633 "rdma_max_cq_size": 0, 00:31:27.633 "rdma_srq_size": 0, 00:31:27.633 "reconnect_delay_sec": 0, 00:31:27.633 "timeout_admin_us": 0, 00:31:27.633 "timeout_us": 0, 00:31:27.633 "transport_ack_timeout": 0, 00:31:27.633 "transport_retry_count": 4, 00:31:27.633 "transport_tos": 0 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "bdev_nvme_attach_controller", 00:31:27.633 "params": { 00:31:27.633 "adrfam": "IPv4", 00:31:27.633 "ctrlr_loss_timeout_sec": 0, 00:31:27.633 "ddgst": false, 00:31:27.633 "fast_io_fail_timeout_sec": 0, 00:31:27.633 "hdgst": false, 00:31:27.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.633 "name": "nvme0", 00:31:27.633 "prchk_guard": false, 00:31:27.633 "prchk_reftag": false, 00:31:27.633 "psk": "key0", 00:31:27.633 "reconnect_delay_sec": 0, 00:31:27.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.633 "traddr": "127.0.0.1", 00:31:27.633 "trsvcid": "4420", 
00:31:27.633 "trtype": "TCP" 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "bdev_nvme_set_hotplug", 00:31:27.633 "params": { 00:31:27.633 "enable": false, 00:31:27.633 "period_us": 100000 00:31:27.633 } 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "method": "bdev_wait_for_examine" 00:31:27.633 } 00:31:27.633 ] 00:31:27.633 }, 00:31:27.633 { 00:31:27.633 "subsystem": "nbd", 00:31:27.633 "config": [] 00:31:27.633 } 00:31:27.633 ] 00:31:27.633 }' 00:31:27.633 20:31:16 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:27.633 20:31:16 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:27.633 20:31:16 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:27.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:27.633 20:31:16 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:27.633 20:31:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:27.633 [2024-07-14 20:31:16.508891] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:27.633 [2024-07-14 20:31:16.509001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119095 ] 00:31:27.633 [2024-07-14 20:31:16.643105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.892 [2024-07-14 20:31:16.740712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.892 [2024-07-14 20:31:16.918709] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:28.459 20:31:17 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:28.459 20:31:17 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:28.459 20:31:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:28.459 20:31:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:28.459 20:31:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.718 20:31:17 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:28.718 20:31:17 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:28.718 20:31:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:28.718 20:31:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:28.718 20:31:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:28.718 20:31:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:28.718 20:31:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.976 20:31:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:28.976 20:31:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:28.976 20:31:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:28.976 20:31:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:28.976 20:31:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:28.976 20:31:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:31:28.976 20:31:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:29.235 20:31:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:29.235 20:31:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:29.235 20:31:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:29.235 20:31:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:29.493 20:31:18 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:29.493 20:31:18 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:29.493 20:31:18 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4mOd9aqnhr /tmp/tmp.0QoOV3W3Yq 00:31:29.493 20:31:18 keyring_file -- keyring/file.sh@20 -- # killprocess 119095 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 119095 ']' 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@950 -- # kill -0 119095 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119095 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:29.493 killing process with pid 119095 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119095' 00:31:29.493 Received shutdown signal, test time was about 1.000000 seconds 00:31:29.493 00:31:29.493 Latency(us) 00:31:29.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.493 =================================================================================================================== 00:31:29.493 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@965 -- # kill 119095 00:31:29.493 20:31:18 keyring_file -- common/autotest_common.sh@970 -- # wait 119095 00:31:29.752 20:31:18 keyring_file -- keyring/file.sh@21 -- # killprocess 118599 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 118599 ']' 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@950 -- # kill -0 118599 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118599 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:29.752 killing process with pid 118599 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118599' 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@965 -- # kill 118599 00:31:29.752 [2024-07-14 20:31:18.685127] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:29.752 20:31:18 keyring_file -- common/autotest_common.sh@970 -- # wait 118599 00:31:30.320 00:31:30.320 real 0m15.518s 00:31:30.320 user 0m37.750s 00:31:30.320 sys 
0m3.433s 00:31:30.320 20:31:19 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:30.320 20:31:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:30.320 ************************************ 00:31:30.320 END TEST keyring_file 00:31:30.320 ************************************ 00:31:30.320 20:31:19 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:30.320 20:31:19 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:30.320 20:31:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:30.320 20:31:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:30.320 20:31:19 -- common/autotest_common.sh@10 -- # set +x 00:31:30.320 ************************************ 00:31:30.320 START TEST keyring_linux 00:31:30.320 ************************************ 00:31:30.320 20:31:19 keyring_linux -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:30.320 * Looking for test storage... 00:31:30.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:30.320 20:31:19 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:30.320 20:31:19 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.320 20:31:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=caa3dfc1-79db-49e7-95fe-b9f6785698c4 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:30.579 20:31:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.579 20:31:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.579 20:31:19 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.579 20:31:19 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.579 20:31:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.579 20:31:19 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.579 20:31:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:30.579 20:31:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:30.579 20:31:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:30.579 20:31:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:30.579 20:31:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:30.579 20:31:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:30.579 20:31:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:30.579 20:31:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:30.579 20:31:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:30.579 /tmp/:spdk-test:key0 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:30.579 20:31:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:30.579 20:31:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:30.580 20:31:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:30.580 20:31:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:30.580 20:31:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:30.580 20:31:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:30.580 20:31:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:30.580 20:31:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:30.580 20:31:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:30.580 20:31:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:30.580 /tmp/:spdk-test:key1 00:31:30.580 20:31:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:30.580 20:31:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=119248 00:31:30.580 20:31:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 119248 00:31:30.580 20:31:19 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 119248 ']' 00:31:30.580 20:31:19 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.580 20:31:19 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:30.580 20:31:19 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
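The key preparation above mirrors what the file-based test did earlier, except the formatted PSKs now land in predictably named files. Roughly the equivalent of the following sketch; the hex keys, digest value and paths are the ones from this run, while the redirect into the path is an assumption about prep_key's internals in keyring/common.sh:

    # format_interchange_psk wraps the raw hex key into the NVMeTLSkey-1
    # interchange form via an inline python snippet (not reproduced here)
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    format_interchange_psk 112233445566778899aabbccddeeff00 0 > /tmp/:spdk-test:key1
    chmod 0600 /tmp/:spdk-test:key0 /tmp/:spdk-test:key1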
00:31:30.580 20:31:19 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:30.580 20:31:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:30.580 20:31:19 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:30.580 [2024-07-14 20:31:19.576347] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:30.580 [2024-07-14 20:31:19.576497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119248 ] 00:31:30.839 [2024-07-14 20:31:19.713456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.839 [2024-07-14 20:31:19.782619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:31:31.774 20:31:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:31.774 [2024-07-14 20:31:20.593911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.774 null0 00:31:31.774 [2024-07-14 20:31:20.625875] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:31.774 [2024-07-14 20:31:20.626124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.774 20:31:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:31.774 1024223484 00:31:31.774 20:31:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:31.774 292773155 00:31:31.774 20:31:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=119280 00:31:31.774 20:31:20 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:31.774 20:31:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 119280 /var/tmp/bperf.sock 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 119280 ']' 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:31.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:31.774 20:31:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:31.774 [2024-07-14 20:31:20.715007] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
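With the target listening and both PSKs loaded into the session keyring under ":spdk-test:" names, the remainder of the linux-keyring test (the RPC calls that follow just below, plus the cleanup at the very end) condenses to the sketch here. Commands, key strings and serial lookups are taken from this run; the $RPC shorthand is illustrative.

    # register the interchange-format PSKs in the session keyring (@s)
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC keyring_linux_set_options --enable        # turn on the kernel-keyring backend
    $RPC framework_start_init                      # bdevperf was started with --wait-for-rpc
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0

    # cleanup: resolve each key's serial and unlink it from the session keyring
    keyctl unlink "$(keyctl search @s user :spdk-test:key0)"
    keyctl unlink "$(keyctl search @s user :spdk-test:key1)"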
00:31:31.774 [2024-07-14 20:31:20.715145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119280 ] 00:31:31.774 [2024-07-14 20:31:20.858222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.032 [2024-07-14 20:31:20.953711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.597 20:31:21 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:32.597 20:31:21 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:31:32.597 20:31:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:32.597 20:31:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:32.854 20:31:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:32.854 20:31:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:33.421 20:31:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:33.421 20:31:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:33.421 [2024-07-14 20:31:22.399721] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:33.421 nvme0n1 00:31:33.421 20:31:22 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:33.421 20:31:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:33.421 20:31:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:33.421 20:31:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:33.421 20:31:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.421 20:31:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:33.679 20:31:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:33.679 20:31:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:33.679 20:31:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:33.679 20:31:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:33.679 20:31:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:33.679 20:31:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.679 20:31:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:33.937 20:31:23 keyring_linux -- keyring/linux.sh@25 -- # sn=1024223484 00:31:33.937 20:31:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:33.937 20:31:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:33.937 20:31:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 1024223484 == \1\0\2\4\2\2\3\4\8\4 ]] 00:31:33.937 20:31:23 keyring_linux -- keyring/linux.sh@27 
-- # keyctl print 1024223484 00:31:33.937 20:31:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:33.937 20:31:23 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:34.196 Running I/O for 1 seconds... 00:31:35.132 00:31:35.132 Latency(us) 00:31:35.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:35.132 nvme0n1 : 1.01 13041.16 50.94 0.00 0.00 9764.33 2517.18 19422.49 00:31:35.132 =================================================================================================================== 00:31:35.132 Total : 13041.16 50.94 0.00 0.00 9764.33 2517.18 19422.49 00:31:35.132 0 00:31:35.132 20:31:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:35.132 20:31:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:35.389 20:31:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:35.389 20:31:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:35.389 20:31:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:35.389 20:31:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:35.389 20:31:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:35.389 20:31:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:35.646 20:31:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:35.646 20:31:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:35.646 20:31:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:35.646 20:31:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:35.646 20:31:24 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:35.646 20:31:24 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:35.646 20:31:24 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:35.646 20:31:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:35.646 20:31:24 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:35.646 20:31:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:35.646 20:31:24 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:35.646 20:31:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:31:35.905 [2024-07-14 20:31:24.919961] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:35.905 [2024-07-14 20:31:24.920647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff3d70 (107): Transport endpoint is not connected 00:31:35.905 [2024-07-14 20:31:24.921638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff3d70 (9): Bad file descriptor 00:31:35.905 [2024-07-14 20:31:24.922635] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.905 [2024-07-14 20:31:24.922656] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:35.905 [2024-07-14 20:31:24.922666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.905 2024/07/14 20:31:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:35.905 request: 00:31:35.905 { 00:31:35.905 "method": "bdev_nvme_attach_controller", 00:31:35.905 "params": { 00:31:35.905 "name": "nvme0", 00:31:35.905 "trtype": "tcp", 00:31:35.905 "traddr": "127.0.0.1", 00:31:35.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.905 "adrfam": "ipv4", 00:31:35.905 "trsvcid": "4420", 00:31:35.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.905 "psk": ":spdk-test:key1" 00:31:35.905 } 00:31:35.905 } 00:31:35.905 Got JSON-RPC error response 00:31:35.905 GoRPCClient: error on JSON-RPC call 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@33 -- # sn=1024223484 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1024223484 00:31:35.905 1 links removed 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@33 -- # sn=292773155 00:31:35.905 20:31:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 292773155 00:31:35.905 1 links removed 00:31:35.905 20:31:24 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 119280 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 119280 ']' 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 119280 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:35.905 20:31:24 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119280 00:31:36.163 20:31:24 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:36.163 20:31:24 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:36.163 20:31:24 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119280' 00:31:36.163 killing process with pid 119280 00:31:36.163 Received shutdown signal, test time was about 1.000000 seconds 00:31:36.163 00:31:36.163 Latency(us) 00:31:36.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.163 =================================================================================================================== 00:31:36.163 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:36.163 20:31:24 keyring_linux -- common/autotest_common.sh@965 -- # kill 119280 00:31:36.163 20:31:24 keyring_linux -- common/autotest_common.sh@970 -- # wait 119280 00:31:36.420 20:31:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 119248 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 119248 ']' 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 119248 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119248 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:36.420 killing process with pid 119248 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119248' 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@965 -- # kill 119248 00:31:36.420 20:31:25 keyring_linux -- common/autotest_common.sh@970 -- # wait 119248 00:31:36.986 00:31:36.986 real 0m6.510s 00:31:36.986 user 0m12.173s 00:31:36.986 sys 0m1.810s 00:31:36.986 20:31:25 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:36.986 20:31:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:36.986 ************************************ 00:31:36.986 END TEST keyring_linux 00:31:36.986 ************************************ 00:31:36.986 20:31:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@352 
-- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:36.986 20:31:25 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:36.986 20:31:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:36.986 20:31:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:36.986 20:31:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:36.986 20:31:25 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:36.986 20:31:25 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:36.986 20:31:25 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:36.986 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.986 20:31:25 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:36.986 20:31:25 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:31:36.986 20:31:25 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:31:36.986 20:31:25 -- common/autotest_common.sh@10 -- # set +x 00:31:38.884 INFO: APP EXITING 00:31:38.884 INFO: killing all VMs 00:31:38.884 INFO: killing vhost app 00:31:38.884 INFO: EXIT DONE 00:31:39.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:39.401 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:39.401 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:39.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:39.968 Cleaning 00:31:39.968 Removing: /var/run/dpdk/spdk0/config 00:31:39.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:39.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:39.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:39.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:39.968 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:39.968 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:40.226 Removing: /var/run/dpdk/spdk1/config 00:31:40.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:40.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:40.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:40.227 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:40.227 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:40.227 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:40.227 Removing: /var/run/dpdk/spdk2/config 00:31:40.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:40.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:40.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:40.227 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:40.227 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:40.227 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:40.227 Removing: /var/run/dpdk/spdk3/config 00:31:40.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:40.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:40.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:40.227 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:40.227 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:40.227 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:40.227 Removing: /var/run/dpdk/spdk4/config 00:31:40.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:40.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:40.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:40.227 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 
00:31:40.227 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:40.227 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:40.227 Removing: /dev/shm/nvmf_trace.0 00:31:40.227 Removing: /dev/shm/spdk_tgt_trace.pid72959 00:31:40.227 Removing: /var/run/dpdk/spdk0 00:31:40.227 Removing: /var/run/dpdk/spdk1 00:31:40.227 Removing: /var/run/dpdk/spdk2 00:31:40.227 Removing: /var/run/dpdk/spdk3 00:31:40.227 Removing: /var/run/dpdk/spdk4 00:31:40.227 Removing: /var/run/dpdk/spdk_pid100007 00:31:40.227 Removing: /var/run/dpdk/spdk_pid100271 00:31:40.227 Removing: /var/run/dpdk/spdk_pid100388 00:31:40.227 Removing: /var/run/dpdk/spdk_pid100636 00:31:40.227 Removing: /var/run/dpdk/spdk_pid100763 00:31:40.227 Removing: /var/run/dpdk/spdk_pid100898 00:31:40.227 Removing: /var/run/dpdk/spdk_pid101243 00:31:40.227 Removing: /var/run/dpdk/spdk_pid101622 00:31:40.227 Removing: /var/run/dpdk/spdk_pid101625 00:31:40.227 Removing: /var/run/dpdk/spdk_pid103851 00:31:40.227 Removing: /var/run/dpdk/spdk_pid104154 00:31:40.227 Removing: /var/run/dpdk/spdk_pid104645 00:31:40.227 Removing: /var/run/dpdk/spdk_pid104651 00:31:40.227 Removing: /var/run/dpdk/spdk_pid104987 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105001 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105021 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105046 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105052 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105203 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105206 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105309 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105311 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105415 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105426 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105887 00:31:40.227 Removing: /var/run/dpdk/spdk_pid105930 00:31:40.227 Removing: /var/run/dpdk/spdk_pid106088 00:31:40.227 Removing: /var/run/dpdk/spdk_pid106203 00:31:40.227 Removing: /var/run/dpdk/spdk_pid106587 00:31:40.227 Removing: /var/run/dpdk/spdk_pid106837 00:31:40.227 Removing: /var/run/dpdk/spdk_pid107308 00:31:40.227 Removing: /var/run/dpdk/spdk_pid107895 00:31:40.227 Removing: /var/run/dpdk/spdk_pid109235 00:31:40.227 Removing: /var/run/dpdk/spdk_pid109827 00:31:40.227 Removing: /var/run/dpdk/spdk_pid109829 00:31:40.227 Removing: /var/run/dpdk/spdk_pid111743 00:31:40.227 Removing: /var/run/dpdk/spdk_pid111834 00:31:40.227 Removing: /var/run/dpdk/spdk_pid111931 00:31:40.227 Removing: /var/run/dpdk/spdk_pid112016 00:31:40.227 Removing: /var/run/dpdk/spdk_pid112174 00:31:40.227 Removing: /var/run/dpdk/spdk_pid112265 00:31:40.227 Removing: /var/run/dpdk/spdk_pid112350 00:31:40.227 Removing: /var/run/dpdk/spdk_pid112435 00:31:40.227 Removing: /var/run/dpdk/spdk_pid112775 00:31:40.227 Removing: /var/run/dpdk/spdk_pid113457 00:31:40.227 Removing: /var/run/dpdk/spdk_pid114789 00:31:40.485 Removing: /var/run/dpdk/spdk_pid114975 00:31:40.485 Removing: /var/run/dpdk/spdk_pid115256 00:31:40.485 Removing: /var/run/dpdk/spdk_pid115552 00:31:40.485 Removing: /var/run/dpdk/spdk_pid116098 00:31:40.485 Removing: /var/run/dpdk/spdk_pid116103 00:31:40.485 Removing: /var/run/dpdk/spdk_pid116463 00:31:40.485 Removing: /var/run/dpdk/spdk_pid116622 00:31:40.485 Removing: /var/run/dpdk/spdk_pid116774 00:31:40.485 Removing: /var/run/dpdk/spdk_pid116871 00:31:40.485 Removing: /var/run/dpdk/spdk_pid117021 00:31:40.485 Removing: /var/run/dpdk/spdk_pid117129 00:31:40.485 Removing: /var/run/dpdk/spdk_pid117795 00:31:40.485 Removing: /var/run/dpdk/spdk_pid117825 00:31:40.485 Removing: /var/run/dpdk/spdk_pid117860 
00:31:40.485 Removing: /var/run/dpdk/spdk_pid118108 00:31:40.485 Removing: /var/run/dpdk/spdk_pid118143 00:31:40.485 Removing: /var/run/dpdk/spdk_pid118174 00:31:40.485 Removing: /var/run/dpdk/spdk_pid118599 00:31:40.485 Removing: /var/run/dpdk/spdk_pid118634 00:31:40.485 Removing: /var/run/dpdk/spdk_pid119095 00:31:40.485 Removing: /var/run/dpdk/spdk_pid119248 00:31:40.485 Removing: /var/run/dpdk/spdk_pid119280 00:31:40.485 Removing: /var/run/dpdk/spdk_pid72814 00:31:40.485 Removing: /var/run/dpdk/spdk_pid72959 00:31:40.485 Removing: /var/run/dpdk/spdk_pid73220 00:31:40.485 Removing: /var/run/dpdk/spdk_pid73312 00:31:40.485 Removing: /var/run/dpdk/spdk_pid73352 00:31:40.485 Removing: /var/run/dpdk/spdk_pid73460 00:31:40.485 Removing: /var/run/dpdk/spdk_pid73491 00:31:40.485 Removing: /var/run/dpdk/spdk_pid73614 00:31:40.485 Removing: /var/run/dpdk/spdk_pid73884 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74054 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74130 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74222 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74316 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74350 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74380 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74442 00:31:40.485 Removing: /var/run/dpdk/spdk_pid74559 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75188 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75252 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75321 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75349 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75431 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75440 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75519 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75547 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75604 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75634 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75680 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75710 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75862 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75892 00:31:40.485 Removing: /var/run/dpdk/spdk_pid75968 00:31:40.485 Removing: /var/run/dpdk/spdk_pid76036 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76066 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76119 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76159 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76188 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76223 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76257 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76292 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76326 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76361 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76397 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76432 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76465 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76501 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76530 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76570 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76599 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76639 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76668 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76711 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76743 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76783 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76813 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76883 00:31:40.486 Removing: /var/run/dpdk/spdk_pid76994 00:31:40.486 Removing: /var/run/dpdk/spdk_pid77406 00:31:40.486 Removing: /var/run/dpdk/spdk_pid84145 00:31:40.486 Removing: /var/run/dpdk/spdk_pid84493 00:31:40.486 Removing: /var/run/dpdk/spdk_pid86920 00:31:40.486 Removing: 
/var/run/dpdk/spdk_pid87299 00:31:40.486 Removing: /var/run/dpdk/spdk_pid87554 00:31:40.486 Removing: /var/run/dpdk/spdk_pid87606 00:31:40.744 Removing: /var/run/dpdk/spdk_pid88466 00:31:40.744 Removing: /var/run/dpdk/spdk_pid88515 00:31:40.744 Removing: /var/run/dpdk/spdk_pid88872 00:31:40.744 Removing: /var/run/dpdk/spdk_pid89403 00:31:40.744 Removing: /var/run/dpdk/spdk_pid89842 00:31:40.744 Removing: /var/run/dpdk/spdk_pid90822 00:31:40.744 Removing: /var/run/dpdk/spdk_pid91797 00:31:40.744 Removing: /var/run/dpdk/spdk_pid91912 00:31:40.744 Removing: /var/run/dpdk/spdk_pid91983 00:31:40.744 Removing: /var/run/dpdk/spdk_pid93453 00:31:40.744 Removing: /var/run/dpdk/spdk_pid93676 00:31:40.744 Removing: /var/run/dpdk/spdk_pid98881 00:31:40.744 Removing: /var/run/dpdk/spdk_pid99314 00:31:40.744 Removing: /var/run/dpdk/spdk_pid99418 00:31:40.744 Removing: /var/run/dpdk/spdk_pid99564 00:31:40.744 Removing: /var/run/dpdk/spdk_pid99610 00:31:40.744 Removing: /var/run/dpdk/spdk_pid99650 00:31:40.744 Removing: /var/run/dpdk/spdk_pid99700 00:31:40.744 Removing: /var/run/dpdk/spdk_pid99854 00:31:40.744 Clean 00:31:40.744 20:31:29 -- common/autotest_common.sh@1447 -- # return 0 00:31:40.744 20:31:29 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:40.744 20:31:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.744 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:31:40.744 20:31:29 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:40.744 20:31:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.744 20:31:29 -- common/autotest_common.sh@10 -- # set +x 00:31:40.744 20:31:29 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:40.744 20:31:29 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:40.744 20:31:29 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:40.744 20:31:29 -- spdk/autotest.sh@391 -- # hash lcov 00:31:40.744 20:31:29 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:40.744 20:31:29 -- spdk/autotest.sh@393 -- # hostname 00:31:40.744 20:31:29 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:41.002 geninfo: WARNING: invalid characters removed from testname! 
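The lcov invocations that follow merge the pre-test baseline tracefile with the per-test capture above and then strip DPDK, system, and example/app paths from the combined report. A condensed sketch of that sequence, with the genhtml-related --rc switches elided; the flags and patterns are copied from the trace below, only the shell variable names are illustrative:

    out=/home/vagrant/spdk_repo/spdk/../output
    rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)   # genhtml --rc switches from the trace elided here
    # Merge the pre-test baseline with the per-test capture into a single tracefile.
    lcov "${rc[@]}" --no-external -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Strip paths that are not SPDK code under test; the patterns are the ones in the trace below.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${rc[@]}" --no-external -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done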
00:32:07.542 20:31:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:07.542 20:31:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:09.444 20:31:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:11.349 20:32:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:13.879 20:32:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:15.778 20:32:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:18.305 20:32:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:18.305 20:32:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:18.305 20:32:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:18.305 20:32:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.305 20:32:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.305 20:32:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.305 20:32:07 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.305 20:32:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.305 20:32:07 -- paths/export.sh@5 -- $ export PATH 00:32:18.305 20:32:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.305 20:32:07 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:18.305 20:32:07 -- common/autobuild_common.sh@437 -- $ date +%s 00:32:18.305 20:32:07 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720989127.XXXXXX 00:32:18.305 20:32:07 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720989127.iw38iq 00:32:18.305 20:32:07 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:32:18.305 20:32:07 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:32:18.305 20:32:07 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:32:18.305 20:32:07 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:32:18.305 20:32:07 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:18.305 20:32:07 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:18.305 20:32:07 -- common/autobuild_common.sh@453 -- $ get_config_params 00:32:18.305 20:32:07 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:32:18.305 20:32:07 -- common/autotest_common.sh@10 -- $ set +x 00:32:18.305 20:32:07 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:32:18.305 20:32:07 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:32:18.305 20:32:07 -- pm/common@17 -- $ local monitor 00:32:18.305 20:32:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:18.305 20:32:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:18.305 20:32:07 -- pm/common@25 -- $ sleep 1 00:32:18.305 20:32:07 -- pm/common@21 -- $ date +%s 00:32:18.305 20:32:07 -- pm/common@21 -- $ date +%s 00:32:18.305 20:32:07 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720989127 00:32:18.305 20:32:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720989127 00:32:18.305 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720989127_collect-vmstat.pm.log 00:32:18.305 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720989127_collect-cpu-load.pm.log 00:32:19.237 20:32:08 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:32:19.237 20:32:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:19.237 20:32:08 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:19.237 20:32:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:19.237 20:32:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:19.237 20:32:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:19.237 20:32:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:19.237 20:32:08 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:19.237 20:32:08 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:19.237 20:32:08 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:19.237 20:32:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:19.237 20:32:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:19.237 20:32:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:19.237 20:32:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:19.237 20:32:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:19.237 20:32:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:19.237 20:32:08 -- pm/common@44 -- $ pid=121010 00:32:19.237 20:32:08 -- pm/common@50 -- $ kill -TERM 121010 00:32:19.237 20:32:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:19.237 20:32:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:19.237 20:32:08 -- pm/common@44 -- $ pid=121012 00:32:19.237 20:32:08 -- pm/common@50 -- $ kill -TERM 121012 00:32:19.237 + [[ -n 5900 ]] 00:32:19.237 + sudo kill 5900 00:32:19.246 [Pipeline] } 00:32:19.261 [Pipeline] // timeout 00:32:19.265 [Pipeline] } 00:32:19.276 [Pipeline] // stage 00:32:19.280 [Pipeline] } 00:32:19.291 [Pipeline] // catchError 00:32:19.297 [Pipeline] stage 00:32:19.299 [Pipeline] { (Stop VM) 00:32:19.309 [Pipeline] sh 00:32:19.584 + vagrant halt 00:32:22.112 ==> default: Halting domain... 00:32:28.697 [Pipeline] sh 00:32:28.975 + vagrant destroy -f 00:32:31.507 ==> default: Removing domain... 
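Just before the VM teardown above, stop_monitor_resources signaled the CPU-load and vmstat collectors through their pid files so they could close their .pm.log files. A minimal sketch of that pid-file pattern, assuming the pid is read back from the file; the function name and that detail are illustrative, not the actual pm/common helper:

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    stop_monitors() {
        local pidfile pid
        for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
            [[ -e $pidfile ]] || continue          # collector never started, nothing to signal
            pid=$(<"$pidfile")                     # assumed: pid file holds the collector's pid
            kill -TERM "$pid" 2>/dev/null || true  # let the collector flush its .pm.log and exit
        done
    }
    stop_monitors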
00:32:31.523 [Pipeline] sh 00:32:31.807 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:32:31.817 [Pipeline] } 00:32:31.841 [Pipeline] // stage 00:32:31.847 [Pipeline] } 00:32:31.869 [Pipeline] // dir 00:32:31.875 [Pipeline] } 00:32:31.897 [Pipeline] // wrap 00:32:31.903 [Pipeline] } 00:32:31.923 [Pipeline] // catchError 00:32:31.934 [Pipeline] stage 00:32:31.937 [Pipeline] { (Epilogue) 00:32:31.956 [Pipeline] sh 00:32:32.242 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:37.519 [Pipeline] catchError 00:32:37.521 [Pipeline] { 00:32:37.536 [Pipeline] sh 00:32:37.817 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:38.075 Artifacts sizes are good 00:32:38.084 [Pipeline] } 00:32:38.103 [Pipeline] // catchError 00:32:38.114 [Pipeline] archiveArtifacts 00:32:38.122 Archiving artifacts 00:32:38.306 [Pipeline] cleanWs 00:32:38.323 [WS-CLEANUP] Deleting project workspace... 00:32:38.323 [WS-CLEANUP] Deferred wipeout is used... 00:32:38.349 [WS-CLEANUP] done 00:32:38.351 [Pipeline] } 00:32:38.369 [Pipeline] // stage 00:32:38.374 [Pipeline] } 00:32:38.391 [Pipeline] // node 00:32:38.397 [Pipeline] End of Pipeline 00:32:38.434 Finished: SUCCESS
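The epilogue's check_artifacts_size.sh is not reproduced in this log; "Artifacts sizes are good" is its only visible output. A purely hypothetical sketch of the kind of size gate that would print that line; the budget, path, and logic are assumptions, not the actual jbp script:

    # Hypothetical size gate; the real check_artifacts_size.sh is not part of this log.
    max_kb=$((1024 * 1024))                        # assumed 1 GiB budget, for illustration only
    total_kb=$(du -sk output | awk '{print $1}')   # size of the archived output directory
    if (( total_kb > max_kb )); then
        echo "Artifacts size ${total_kb} KB exceeds ${max_kb} KB"
        exit 1
    fi
    echo "Artifacts sizes are good"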